2026-04-01T09:49:20.071 INFO:root:teuthology version: 1.2.4.dev37+ga59626679
2026-04-01T09:49:20.077 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-04-01T09:49:20.101 INFO:teuthology.run:Config: archive_path: /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721
branch: wip-sse-s3-on-v20.2.0
description: rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides
  supported-distros/{rocky_latest} tasks/{0-install test_dedup}}
email: null
first_in_suite: false
flavor: default
job_id: '4721'
last_in_suite: false
machine_type: vps
name: supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: rocky
os_version: '9.7'
overrides:
  admin_socket:
    branch: wip-sse-s3-on-v20.2.0
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      logical_volumes:
        lv_1:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_2:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_3:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_4:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
      timezone: UTC
      volume_groups:
        vg_nvme:
          pvs: /dev/vdb,/dev/vdc,/dev/vdd,/dev/vde
  ceph:
    conf:
      client:
        debug rgw: 20
        debug rgw dedup: 20
        setgroup: ceph
        setuser: ceph
      global:
        osd_max_pg_log_entries: 10
        osd_min_pg_log_entries: 10
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore allocator: bitmap
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug ms: 1
        debug osd: 20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd mclock iops capacity threshold hdd: 49000
        osd objectstore: bluestore
        osd shutdown pgref assert: true
    flavor: default
    fs: xfs
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - \(POOL_APP_NOT_ENABLED\)
    - not have an application enabled
    sha1: 0597158282e6d69429e60df2354a6c8eed0e5bce
  ceph-deploy:
    bluestore: true
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd objectstore: bluestore
    fs: xfs
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm
    containers:
      image: harbor.clyso.com/custom-ceph/ceph/ceph:sse-s3-kmip-preview-not-for-production-1
  install:
    ceph:
      flavor: default
      sha1: 0597158282e6d69429e60df2354a6c8eed0e5bce
      extra_system_packages:
        deb:
        - python3-jmespath
        - python3-xmltodict
        - s3cmd
        rpm:
        - bzip2
        - perl-Test-Harness
        - python3-jmespath
        - python3-xmltodict
        - s3cmd
      repos:
      - name: ceph-source
        priority: 1
        url: https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS
      - name: ceph-noarch
        priority: 1
        url: https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch
      - name: ceph
        priority: 1
        url: https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64
  rgw:
    frontend: beast
    storage classes:
      FROZEN: null
      LUKEWARM: null
  s3tests:
    sha1: e0c4ff71baef6d5126a0201df5fe54196d89b296
  selinux:
    allowlist:
    - scontext=system_u:system_r:getty_t:s0
  thrashosds:
    bdev_inject_crash: 2
    bdev_inject_crash_probability: 0.5
  workunit:
    branch: tt-20.2.0-sse-s3-kmip-preview-not-for-production-1
    sha1: 99e8bef8f767b591604d6078b7861a00c2936d53
owner: supriti
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
- - client.2
seed: 3517
sha1: 0597158282e6d69429e60df2354a6c8eed0e5bce
sleep_before_teardown: 0
suite: rgw
suite_branch: tt-20.2.0-sse-s3-kmip-preview-not-for-production-1
suite_path: /home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa
suite_relpath: qa
suite_repo: http://git.local/ceph.git
suite_sha1: 99e8bef8f767b591604d6078b7861a00c2936d53
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMKaBYra0YQC0r5v8PJSnyELq+uBASa/JHP0hVqf/Gsj+uDFuUIdK2PWVf4v0w5+FvinmM7yTymEwx+d5MrPTjY=
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNz/Yckkc47y4b9KfD/sbYbvn8ajojOuiJI63PpsGF6L44sz0OnCf10skNklPBbSuXi8nEP566fiU6LHwxIwWU8=
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDbCrGZhGhUeI32QJ9fDw+VpIAPWsJvEnXsdQ9i3JuGSSIFA8JI84/0XrQ/jtdCEiLWrsi2zHpOSYEytCsN9Y7o=
tasks:
- install: null
- ceph: null
- openssl_keys: null
- rgw:
  - client.0
  - client.1
  - client.2
- tox:
  - client.0
- tox:
  - client.0
- dedup-tests:
    client.0:
      rgw_server: client.0
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: uv2
teuthology_repo: https://github.com/kshtsk/teuthology
teuthology_sha1: a59626679648f962bca99d20d35578f2998c8f37
timestamp: 2026-04-01_09:45:36
tube: vps
user: supriti
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.282426
2026-04-01T09:49:20.101 INFO:teuthology.run:suite_path is set to /home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa; will attempt to use it
2026-04-01T09:49:20.101 INFO:teuthology.run:Found tasks at /home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks
2026-04-01T09:49:20.102 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-04-01T09:49:20.102 INFO:teuthology.task.internal:Saving configuration
2026-04-01T09:49:20.137 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-04-01T09:49:20.138 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-04-01T09:49:20.145 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'rocky', 'os_version': '9.7', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-04-01 09:47:51.736497', 'locked_by': 'supriti', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMKaBYra0YQC0r5v8PJSnyELq+uBASa/JHP0hVqf/Gsj+uDFuUIdK2PWVf4v0w5+FvinmM7yTymEwx+d5MrPTjY='}
2026-04-01T09:49:20.151 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'rocky', 'os_version': '9.7', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-04-01 09:47:51.736160', 'locked_by': 'supriti', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNz/Yckkc47y4b9KfD/sbYbvn8ajojOuiJI63PpsGF6L44sz0OnCf10skNklPBbSuXi8nEP566fiU6LHwxIwWU8='}
2026-04-01T09:49:20.156 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'rocky', 'os_version': '9.7', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-04-01 09:47:51.735096', 'locked_by': 'supriti', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDbCrGZhGhUeI32QJ9fDw+VpIAPWsJvEnXsdQ9i3JuGSSIFA8JI84/0XrQ/jtdCEiLWrsi2zHpOSYEytCsN9Y7o='}
2026-04-01T09:49:20.157 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-04-01T09:49:20.157 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0']
2026-04-01T09:49:20.157 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1']
2026-04-01T09:49:20.157 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['client.2']
2026-04-01T09:49:20.157 INFO:teuthology.run_tasks:Running task console_log...
2026-04-01T09:49:20.163 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-04-01T09:49:20.168 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-04-01T09:49:20.174 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-04-01T09:49:20.174 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7feced955240>, signals=[15])
2026-04-01T09:49:20.174 INFO:teuthology.run_tasks:Running task internal.connect...
2026-04-01T09:49:20.174 INFO:teuthology.task.internal:Opening connections...
2026-04-01T09:49:20.174 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-04-01T09:49:20.175 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-04-01T09:49:20.239 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-04-01T09:49:20.239 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-04-01T09:49:20.299 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-04-01T09:49:20.300 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-04-01T09:49:20.362 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-04-01T09:49:20.364 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-04-01T09:49:20.381 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-04-01T09:49:20.381 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:NAME="Rocky Linux"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="9.7 (Blue Onyx)"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:ID="rocky"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE="rhel centos fedora"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="9.7"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:PLATFORM_ID="platform:el9"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="Rocky Linux 9.7 (Blue Onyx)"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:ANSI_COLOR="0;32"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:LOGO="fedora-logo-icon"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://rockylinux.org/"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:VENDOR_NAME="RESF"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:VENDOR_URL="https://resf.org/"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://bugs.rockylinux.org/"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:SUPPORT_END="2032-05-31"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:ROCKY_SUPPORT_PRODUCT_VERSION="9.7"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT="Rocky Linux"
2026-04-01T09:49:20.439 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="9.7"
2026-04-01T09:49:20.440 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-04-01T09:49:20.445 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-04-01T09:49:20.464 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-04-01T09:49:20.464 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:NAME="Rocky Linux"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="9.7 (Blue Onyx)"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:ID="rocky"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE="rhel centos fedora"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="9.7"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:PLATFORM_ID="platform:el9"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="Rocky Linux 9.7 (Blue Onyx)"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:ANSI_COLOR="0;32"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:LOGO="fedora-logo-icon"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://rockylinux.org/"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:VENDOR_NAME="RESF"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:VENDOR_URL="https://resf.org/"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://bugs.rockylinux.org/"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:SUPPORT_END="2032-05-31"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:ROCKY_SUPPORT_PRODUCT_VERSION="9.7"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT="Rocky Linux"
2026-04-01T09:49:20.521 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="9.7"
2026-04-01T09:49:20.521 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-04-01T09:49:20.526 DEBUG:teuthology.orchestra.run.vm07:> uname -m
2026-04-01T09:49:20.543 INFO:teuthology.orchestra.run.vm07.stdout:x86_64
2026-04-01T09:49:20.543 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:NAME="Rocky Linux"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="9.7 (Blue Onyx)"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:ID="rocky"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE="rhel centos fedora"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="9.7"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:PLATFORM_ID="platform:el9"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="Rocky Linux 9.7 (Blue Onyx)"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:ANSI_COLOR="0;32"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:LOGO="fedora-logo-icon"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://rockylinux.org/"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:VENDOR_NAME="RESF"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:VENDOR_URL="https://resf.org/"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://bugs.rockylinux.org/"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:SUPPORT_END="2032-05-31"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:ROCKY_SUPPORT_PRODUCT_VERSION="9.7"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:REDHAT_SUPPORT_PRODUCT="Rocky Linux"
2026-04-01T09:49:20.601 INFO:teuthology.orchestra.run.vm07.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="9.7"
2026-04-01T09:49:20.601 INFO:teuthology.lock.ops:Updating vm07.local on lock server
2026-04-01T09:49:20.609 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-04-01T09:49:20.612 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-04-01T09:49:20.613 INFO:teuthology.task.internal:Checking for old test directory...
2026-04-01T09:49:20.613 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-04-01T09:49:20.616 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-04-01T09:49:20.617 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest
2026-04-01T09:49:20.659 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-04-01T09:49:20.660 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-04-01T09:49:20.660 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-04-01T09:49:20.673 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-04-01T09:49:20.676 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph)
2026-04-01T09:49:20.689 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-04-01T09:49:20.690 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-04-01T09:49:20.717 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-04-01T09:49:20.718 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-04-01T09:49:20.730 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-04-01T09:49:20.746 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T09:49:20.955 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-04-01T09:49:20.970 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T09:49:21.187 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready
2026-04-01T09:49:21.204 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T09:49:21.401 INFO:teuthology.run_tasks:Running task internal.base...
2026-04-01T09:49:21.404 INFO:teuthology.task.internal:Creating test directory...
2026-04-01T09:49:21.404 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-04-01T09:49:21.406 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-04-01T09:49:21.409 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-04-01T09:49:21.428 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-04-01T09:49:21.429 INFO:teuthology.run_tasks:Running task internal.archive...
2026-04-01T09:49:21.430 INFO:teuthology.task.internal:Creating archive directory...
2026-04-01T09:49:21.430 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-04-01T09:49:21.464 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-04-01T09:49:21.469 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-04-01T09:49:21.490 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-04-01T09:49:21.492 INFO:teuthology.task.internal:Enabling coredump saving...
2026-04-01T09:49:21.492 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-04-01T09:49:21.534 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T09:49:21.534 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-04-01T09:49:21.551 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T09:49:21.552 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv
2026-04-01T09:49:21.564 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T09:49:21.564 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-04-01T09:49:21.576 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-04-01T09:49:21.594 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-04-01T09:49:21.601 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-04-01T09:49:21.611 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-04-01T09:49:21.620 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-04-01T09:49:21.629 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-04-01T09:49:21.630 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-04-01T09:49:21.638 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-04-01T09:49:21.640 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-04-01T09:49:21.641 INFO:teuthology.task.internal:Configuring sudo...
2026-04-01T09:49:21.641 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-04-01T09:49:21.655 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-04-01T09:49:21.674 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-04-01T09:49:21.702 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-04-01T09:49:21.704 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-04-01T09:49:21.704 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-04-01T09:49:21.722 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-04-01T09:49:21.741 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-04-01T09:49:21.758 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-04-01T09:49:21.805 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-04-01T09:49:21.865 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:49:21.865 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-04-01T09:49:21.925 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-04-01T09:49:21.950 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-04-01T09:49:22.009 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:49:22.009 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-04-01T09:49:22.070 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-04-01T09:49:22.138 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-04-01T09:49:22.195 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-04-01T09:49:22.195 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-04-01T09:49:22.252 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-04-01T09:49:22.254 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-04-01T09:49:22.256 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart
2026-04-01T09:49:22.281 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-04-01T09:49:22.282 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-04-01T09:49:22.323 INFO:teuthology.orchestra.run.vm07.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-04-01T09:49:22.643 INFO:teuthology.run_tasks:Running task internal.timer...
2026-04-01T09:49:22.648 INFO:teuthology.task.internal:Starting timer...
2026-04-01T09:49:22.648 INFO:teuthology.run_tasks:Running task pcp...
2026-04-01T09:49:22.698 INFO:teuthology.run_tasks:Running task selinux...
2026-04-01T09:49:22.730 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:getty_t:s0']}
2026-04-01T09:49:22.730 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-04-01T09:49:22.730 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-04-01T09:49:22.730 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported
2026-04-01T09:49:22.730 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-04-01T09:49:22.730 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-04-01T09:49:22.730 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-04-01T09:49:22.731 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-04-01T09:49:22.743 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}
2026-04-01T09:49:22.749 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/kshtsk/ceph-cm-ansible.git
2026-04-01T09:49:22.759 INFO:teuthology.repo_utils:Fetching github.com_kshtsk_ceph-cm-ansible_main from origin
2026-04-01T09:49:23.368 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-04-01T09:49:23.375 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-04-01T09:49:23.375 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "logical_volumes": {"lv_1": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_2": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_3": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_4": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}}, "timezone": "UTC", "volume_groups": {"vg_nvme": {"pvs": "/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde"}}}' -i /tmp/teuth_ansible_inventoryvz8p49od --limit vm00.local,vm03.local,vm07.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-04-01T09:51:30.030 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm03.local'), Remote(name='ubuntu@vm07.local')]
2026-04-01T09:51:30.030 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-04-01T09:51:30.031 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-04-01T09:51:30.103 DEBUG:teuthology.orchestra.run.vm00:> true
2026-04-01T09:51:30.200 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-04-01T09:51:30.200 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-04-01T09:51:30.201 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-04-01T09:51:30.266 DEBUG:teuthology.orchestra.run.vm03:> true
2026-04-01T09:51:30.340 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-04-01T09:51:30.341 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local'
2026-04-01T09:51:30.341 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-04-01T09:51:30.412 DEBUG:teuthology.orchestra.run.vm07:> true
2026-04-01T09:51:30.492 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local'
2026-04-01T09:51:30.492 INFO:teuthology.run_tasks:Running task clock...
2026-04-01T09:51:30.495 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-04-01T09:51:30.495 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-04-01T09:51:30.496 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-04-01T09:51:30.498 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-04-01T09:51:30.498 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-04-01T09:51:30.500 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-04-01T09:51:30.500 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-04-01T09:51:30.540 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-04-01T09:51:30.543 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-04-01T09:51:30.558 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-04-01T09:51:30.560 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-04-01T09:51:30.575 INFO:teuthology.orchestra.run.vm07.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-04-01T09:51:30.592 INFO:teuthology.orchestra.run.vm00.stderr:sudo: ntpd: command not found
2026-04-01T09:51:30.592 INFO:teuthology.orchestra.run.vm03.stderr:sudo: ntpd: command not found
2026-04-01T09:51:30.593 INFO:teuthology.orchestra.run.vm07.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-04-01T09:51:30.608 INFO:teuthology.orchestra.run.vm03.stdout:506 Cannot talk to daemon
2026-04-01T09:51:30.609 INFO:teuthology.orchestra.run.vm00.stdout:506 Cannot talk to daemon
2026-04-01T09:51:30.617 INFO:teuthology.orchestra.run.vm07.stderr:sudo: ntpd: command not found
2026-04-01T09:51:30.625 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-04-01T09:51:30.628 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-04-01T09:51:30.629 INFO:teuthology.orchestra.run.vm07.stdout:506 Cannot talk to daemon
2026-04-01T09:51:30.642 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-04-01T09:51:30.647 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-04-01T09:51:30.648 INFO:teuthology.orchestra.run.vm07.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-04-01T09:51:30.665 INFO:teuthology.orchestra.run.vm07.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-04-01T09:51:30.702 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-04-01T09:51:30.718 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-04-01T09:51:30.726 INFO:teuthology.orchestra.run.vm07.stderr:bash: line 1: ntpq: command not found
2026-04-01T09:51:30.779 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm00.stdout:^? 82.165.178.31 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm00.stdout:^? static.222.16.42.77.clie> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm00.stdout:^? node-4.infogral.is 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm00.stdout:^? 172-104-149-161.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm03.stdout:^? 82.165.178.31 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm03.stdout:^? static.222.16.42.77.clie> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm03.stdout:^? node-4.infogral.is 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm03.stdout:^?
172-104-149-161.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns 2026-04-01T09:51:30.780 INFO:teuthology.orchestra.run.vm07.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-04-01T09:51:30.781 INFO:teuthology.orchestra.run.vm07.stdout:=============================================================================== 2026-04-01T09:51:30.781 INFO:teuthology.orchestra.run.vm07.stdout:^? 172-104-149-161.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns 2026-04-01T09:51:30.781 INFO:teuthology.orchestra.run.vm07.stdout:^? 82.165.178.31 0 6 0 - +0ns[ +0ns] +/- 0ns 2026-04-01T09:51:30.781 INFO:teuthology.orchestra.run.vm07.stdout:^? static.222.16.42.77.clie> 0 6 0 - +0ns[ +0ns] +/- 0ns 2026-04-01T09:51:30.781 INFO:teuthology.orchestra.run.vm07.stdout:^? node-4.infogral.is 0 6 0 - +0ns[ +0ns] +/- 0ns 2026-04-01T09:51:30.781 INFO:teuthology.run_tasks:Running task install... 2026-04-01T09:51:30.784 DEBUG:teuthology.task.install:project ceph 2026-04-01T09:51:30.784 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': '0597158282e6d69429e60df2354a6c8eed0e5bce'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'repos': [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64'}]} 2026-04-01T09:51:30.784 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': '0597158282e6d69429e60df2354a6c8eed0e5bce', 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 
'python3-xmltodict', 's3cmd']}} 2026-04-01T09:51:30.784 INFO:teuthology.task.install:Using flavor: default 2026-04-01T09:51:30.787 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']} 2026-04-01T09:51:30.787 INFO:teuthology.task.install:extra packages: [] 2026-04-01T09:51:30.788 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '0597158282e6d69429e60df2354a6c8eed0e5bce', 'tag': None, 'wait_for_package': False, 'repos': [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64'}]} 
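Annotation: the install task next materializes each entry of the `repos` list as a file under `/etc/yum.repos.d/` (piped through `sudo dd of=...` in the entries below). The log does not show the file contents; a minimal sketch of the conventional yum `.repo` stanza such an entry would produce, with field names assumed from standard dnf configuration:

```python
def repo_file(name: str, url: str, priority: int = 1) -> str:
    # Render a minimal yum/dnf .repo stanza for one repos-list entry.
    # Field set is the conventional dnf layout; the exact fields teuthology
    # writes are not visible in this log, so treat this as an assumption.
    return (f"[{name}]\n"
            f"name={name}\n"
            f"baseurl={url}\n"
            f"priority={priority}\n"
            f"gpgcheck=0\n"
            f"enabled=1\n")
```

`priority=1` mirrors the `'priority': 1` in the config dump above, which lets the ceph repos win over same-named packages in EPEL/BaseOS.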
2026-04-01T09:51:30.788 DEBUG:teuthology.task.install.rpm:Adding repos: [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64'}] 2026-04-01T09:51:30.788 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-04-01T09:51:30.788 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/yum.repos.d/ceph-source.repo 2026-04-01T09:51:30.788 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '0597158282e6d69429e60df2354a6c8eed0e5bce', 'tag': None, 'wait_for_package': False, 'repos': [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64'}]} 2026-04-01T09:51:30.788 DEBUG:teuthology.task.install.rpm:Adding repos: [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS'}, 
{'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64'}] 2026-04-01T09:51:30.788 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-04-01T09:51:30.788 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/yum.repos.d/ceph-source.repo 2026-04-01T09:51:30.788 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '0597158282e6d69429e60df2354a6c8eed0e5bce', 'tag': None, 'wait_for_package': False, 'repos': [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64'}]} 2026-04-01T09:51:30.788 DEBUG:teuthology.task.install.rpm:Adding repos: [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 
'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64'}] 2026-04-01T09:51:30.788 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-04-01T09:51:30.788 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/yum.repos.d/ceph-source.repo 2026-04-01T09:51:30.822 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-04-01T09:51:30.823 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/yum.repos.d/ceph-noarch.repo 2026-04-01T09:51:30.853 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-04-01T09:51:30.853 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/yum.repos.d/ceph-noarch.repo 2026-04-01T09:51:30.854 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-04-01T09:51:30.854 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/yum.repos.d/ceph-noarch.repo 2026-04-01T09:51:30.898 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-04-01T09:51:30.899 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-04-01T09:51:30.922 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-04-01T09:51:30.922 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-04-01T09:51:30.927 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-04-01T09:51:30.927 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-04-01T09:51:30.970 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64 2026-04-01T09:51:30.970 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean all 2026-04-01T09:51:30.995 INFO:teuthology.task.install.rpm:Installing packages: 
ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64 2026-04-01T09:51:30.995 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean all 2026-04-01T09:51:31.004 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64 2026-04-01T09:51:31.004 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean all 2026-04-01T09:51:31.183 INFO:teuthology.orchestra.run.vm00.stdout:47 files removed 2026-04-01T09:51:31.205 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd 2026-04-01T09:51:31.223 INFO:teuthology.orchestra.run.vm03.stdout:47 files removed 2026-04-01T09:51:31.225 INFO:teuthology.orchestra.run.vm07.stdout:47 files removed 2026-04-01T09:51:31.267 DEBUG:teuthology.orchestra.run.vm07:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache 
ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd 2026-04-01T09:51:31.276 DEBUG:teuthology.orchestra.run.vm03:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd 2026-04-01T09:51:31.625 INFO:teuthology.orchestra.run.vm00.stdout:ceph 396 kB/s | 90 kB 00:00 2026-04-01T09:51:31.693 INFO:teuthology.orchestra.run.vm07.stdout:ceph 403 kB/s | 90 kB 00:00 2026-04-01T09:51:31.721 INFO:teuthology.orchestra.run.vm03.stdout:ceph 382 kB/s | 90 kB 00:00 2026-04-01T09:51:31.851 INFO:teuthology.orchestra.run.vm00.stdout:ceph-noarch 126 kB/s | 25 kB 00:00 2026-04-01T09:51:31.924 INFO:teuthology.orchestra.run.vm07.stdout:ceph-noarch 127 kB/s | 25 kB 00:00 2026-04-01T09:51:31.944 INFO:teuthology.orchestra.run.vm03.stdout:ceph-noarch 129 kB/s | 25 kB 00:00 2026-04-01T09:51:32.045 INFO:teuthology.orchestra.run.vm00.stdout:ceph-source 13 kB/s | 2.3 kB 00:00 2026-04-01T09:51:32.128 INFO:teuthology.orchestra.run.vm07.stdout:ceph-source 12 kB/s | 2.3 kB 00:00 2026-04-01T09:51:32.141 INFO:teuthology.orchestra.run.vm03.stdout:ceph-source 13 kB/s | 2.3 kB 00:00 2026-04-01T09:51:32.517 INFO:teuthology.orchestra.run.vm00.stdout:Extra Packages for Enterprise Linux 45 MB/s | 20 MB 00:00 2026-04-01T09:51:32.710 INFO:teuthology.orchestra.run.vm07.stdout:Extra Packages for Enterprise Linux 37 MB/s | 20 MB 00:00 2026-04-01T09:51:32.816 INFO:teuthology.orchestra.run.vm03.stdout:Extra 
Packages for Enterprise Linux 31 MB/s | 20 MB 00:00 2026-04-01T09:51:37.661 INFO:teuthology.orchestra.run.vm00.stdout:lab-extras 57 kB/s | 50 kB 00:00 2026-04-01T09:51:37.702 INFO:teuthology.orchestra.run.vm07.stdout:lab-extras 56 kB/s | 50 kB 00:00 2026-04-01T09:51:37.843 INFO:teuthology.orchestra.run.vm03.stdout:lab-extras 57 kB/s | 50 kB 00:00 2026-04-01T09:51:38.749 INFO:teuthology.orchestra.run.vm07.stdout:Rocky Linux 9 - BaseOS 18 MB/s | 17 MB 00:00 2026-04-01T09:51:38.750 INFO:teuthology.orchestra.run.vm00.stdout:Rocky Linux 9 - BaseOS 17 MB/s | 17 MB 00:01 2026-04-01T09:51:39.083 INFO:teuthology.orchestra.run.vm03.stdout:Rocky Linux 9 - BaseOS 15 MB/s | 17 MB 00:01 2026-04-01T09:51:40.644 INFO:teuthology.orchestra.run.vm07.stdout:Rocky Linux 9 - AppStream 23 MB/s | 17 MB 00:00 2026-04-01T09:51:40.896 INFO:teuthology.orchestra.run.vm00.stdout:Rocky Linux 9 - AppStream 22 MB/s | 17 MB 00:00 2026-04-01T09:51:41.225 INFO:teuthology.orchestra.run.vm03.stdout:Rocky Linux 9 - AppStream 21 MB/s | 17 MB 00:00 2026-04-01T09:51:42.990 INFO:teuthology.orchestra.run.vm07.stdout:Rocky Linux 9 - CRB 8.1 MB/s | 4.3 MB 00:00 2026-04-01T09:51:43.319 INFO:teuthology.orchestra.run.vm00.stdout:Rocky Linux 9 - CRB 7.9 MB/s | 4.3 MB 00:00 2026-04-01T09:51:43.651 INFO:teuthology.orchestra.run.vm03.stdout:Rocky Linux 9 - CRB 7.7 MB/s | 4.3 MB 00:00 2026-04-01T09:51:43.930 INFO:teuthology.orchestra.run.vm07.stdout:Rocky Linux 9 - Extras 46 kB/s | 17 kB 00:00 2026-04-01T09:51:44.271 INFO:teuthology.orchestra.run.vm00.stdout:Rocky Linux 9 - Extras 48 kB/s | 17 kB 00:00 2026-04-01T09:51:44.620 INFO:teuthology.orchestra.run.vm03.stdout:Rocky Linux 9 - Extras 48 kB/s | 17 kB 00:00 2026-04-01T09:51:45.241 INFO:teuthology.orchestra.run.vm07.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-04-01T09:51:45.241 INFO:teuthology.orchestra.run.vm07.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 
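Annotation: the dnf dependency-resolution table that follows lists one package per row as `name arch version repo size`. A small hypothetical parser for such rows (illustrative only; size spans two whitespace-separated tokens, e.g. `5.9 M`):

```python
def parse_txn_line(line: str):
    # Parse one dnf transaction-table row into (name, arch, version, repo, size).
    # Returns None for section headers like "Installing:" or separator rules.
    parts = line.split()
    if len(parts) < 6:
        return None
    name, arch, version, repo = parts[0], parts[1], parts[2], parts[3]
    size = " ".join(parts[4:6])   # e.g. "5.9" + "M" -> "5.9 M"
    return name, arch, version, repo, size
```

Applied to the rows below, this recovers e.g. that `ceph-test` alone is an 85 M download from the `ceph` repo.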
2026-04-01T09:51:45.270 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T09:51:45.275 INFO:teuthology.orchestra.run.vm07.stdout:============================================================================================= 2026-04-01T09:51:45.275 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-04-01T09:51:45.275 INFO:teuthology.orchestra.run.vm07.stdout:============================================================================================= 2026-04-01T09:51:45.275 INFO:teuthology.orchestra.run.vm07.stdout:Installing: 2026-04-01T09:51:45.275 INFO:teuthology.orchestra.run.vm07.stdout: bzip2 x86_64 1.0.8-10.el9_5 baseos 51 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 6.5 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 5.9 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 940 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 154 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 961 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 173 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 15 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 7.4 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 50 k 2026-04-01T09:51:45.276 
INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 85 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-volume noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 297 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 1.0 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 34 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 868 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 126 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 267 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 163 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath noarch 1.0.1-1.el9_7 appstream 43 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 317 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 304 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 99 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 91 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror 
x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.9 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 180 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout:Upgrading: 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: librados2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 3.5 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: librbd1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.8 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout:Installing dependencies: 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options x86_64 1.75.0-13.el9_7 appstream 104 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: c-ares x86_64 1.19.1-2.el9_4 baseos 110 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 43 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.3 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 289 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 5.0 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 17 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts noarch 2:20.2.0-8.g0597158282e.el9.clyso 
ceph-noarch 17 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 25 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: cryptsetup x86_64 2.7.2-4.el9 baseos 310 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas x86_64 3.0.4-8.el9.0.1 appstream 30 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib x86_64 3.0.4-8.el9.0.1 appstream 3.0 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp x86_64 3.0.4-8.el9.0.1 appstream 15 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: fuse x86_64 2.9.9-17.el9 baseos 78 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 41 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-proxy2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 164 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libconfig x86_64 1.7.2-9.el9 baseos 71 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran x86_64 11.5.0-11.el9 baseos 794 k 2026-04-01T09:51:45.276 INFO:teuthology.orchestra.run.vm07.stdout: libnbd x86_64 1.20.3-4.el9 appstream 171 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 159 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath x86_64 11.5.0-11.el9 baseos 184 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 44 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 250 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: librgw2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 6.4 M 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 243 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: libxslt x86_64 1.1.34-13.el9_6 appstream 239 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: lmdb-libs x86_64 0.9.29-3.el9 baseos 60 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 282 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: lua x86_64 5.4.4-4.el9 appstream 187 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: lua-devel x86_64 5.4.4-4.el9 crb 21 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: mailcap noarch 2.1.49-5.el9.0.2 baseos 32 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: openblas x86_64 0.3.29-1.el9 appstream 41 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-04-01T09:51:45.277 
INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: pciutils x86_64 3.7.0-7.el9 baseos 92 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: perl-Benchmark noarch 1.23-481.1.el9_6 appstream 25 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: protobuf x86_64 3.14.0-17.el9_7 appstream 1.0 M 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-compiler x86_64 3.14.0-17.el9_7 crb 862 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel noarch 2.9.1-2.el9 appstream 5.8 M 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 45 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 163 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 241 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot noarch 10.0.1-5.el9 epel 173 k 2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy noarch 18.10.0-5.el9 epel 290 k 
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography x86_64 36.0.1-5.el9_6 baseos 1.2 M
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel x86_64 3.9.23-2.el9 appstream 205 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-isodate noarch 0.6.1-3.el9 epel 56 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2 noarch 2.11.3-8.el9_5 appstream 228 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 166 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-lxml x86_64 4.6.5-3.el9 appstream 1.2 M
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 32 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-msgpack x86_64 1.0.3-2.el9 epel 86 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy x86_64 1:1.23.5-2.el9_7 appstream 5.8 M
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9_7 appstream 368 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-packaging noarch 20.9-5.el9 appstream 69 k
2026-04-01T09:51:45.277 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply noarch 3.11-14.el9.0.1 baseos 103 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-protobuf noarch 3.14.0-17.el9_7 appstream 237 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1 noarch 0.4.8-7.el9_7 appstream 132 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9_7 appstream 210 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser noarch 2.20-6.el9 baseos 124 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyparsing noarch 2.4.7-9.el9.0.1 baseos 150 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests noarch 2.25.1-10.el9_6 baseos 115 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 43 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml noarch 0.10.2-6.el9.0.1 appstream 44 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3 noarch 1.26.5-6.el9_7.1 baseos 191 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmlsec x86_64 1.3.13-1.el9 epel 48 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: qatlib x86_64 24.09.0-1.el9 appstream 221 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 65 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: socat x86_64 1.7.4.1-8.el9 appstream 299 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: unzip x86_64 6.0-59.el9 baseos 180 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: xmlsec1 x86_64 1.2.29-13.el9 appstream 188 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: xmlsec1-openssl x86_64 1.2.29-13.el9 appstream 89 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 63 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: zip x86_64 3.0-35.el9 baseos 263 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout:Installing weak dependencies:
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-k8sevents noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 22 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-daemon x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 35 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: nvme-cli x86_64 2.13-1.el9 baseos 1.0 M
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-influxdb noarch 5.3.1-1.el9 epel 139 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: python3-saml noarch 1.16.0-1.el9 epel 125 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-service x86_64 24.09.0-1.el9 appstream 36 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools x86_64 1:7.2-9.el9 baseos 551 k
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout:=============================================================================================
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout:Install 150 Packages
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout:Upgrade 2 Packages
2026-04-01T09:51:45.278 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:51:45.279 INFO:teuthology.orchestra.run.vm07.stdout:Total download size: 274 M
2026-04-01T09:51:45.279 INFO:teuthology.orchestra.run.vm07.stdout:Downloading Packages:
2026-04-01T09:51:45.528 INFO:teuthology.orchestra.run.vm00.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-04-01T09:51:45.529 INFO:teuthology.orchestra.run.vm00.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-04-01T09:51:45.566 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout:=============================================================================================
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout:=============================================================================================
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout:Installing:
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: bzip2 x86_64 1.0.8-10.el9_5 baseos 51 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 6.5 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 5.9 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 940 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 154 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 961 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 173 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 15 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 7.4 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 50 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 85 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 297 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 1.0 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 34 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 868 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 126 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 267 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 163 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath noarch 1.0.1-1.el9_7 appstream 43 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 317 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 304 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 99 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 91 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.9 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 180 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout:Upgrading:
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: librados2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 3.5 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout: librbd1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.8 M
2026-04-01T09:51:45.574 INFO:teuthology.orchestra.run.vm00.stdout:Installing dependencies:
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options x86_64 1.75.0-13.el9_7 appstream 104 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: c-ares x86_64 1.19.1-2.el9_4 baseos 110 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 43 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.3 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 289 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 5.0 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 17 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 17 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 25 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.7.2-4.el9 baseos 310 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-8.el9.0.1 appstream 30 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-8.el9.0.1 appstream 3.0 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-8.el9.0.1 appstream 15 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: fuse x86_64 2.9.9-17.el9 baseos 78 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 41 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 164 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 baseos 71 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-11.el9 baseos 794 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libnbd x86_64 1.20.3-4.el9 appstream 171 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 159 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-11.el9 baseos 184 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 44 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 250 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 6.4 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 243 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-13.el9_6 appstream 239 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: lmdb-libs x86_64 0.9.29-3.el9 baseos 60 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 282 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 appstream 187 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 crb 21 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: mailcap noarch 2.1.49-5.el9.0.2 baseos 32 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 appstream 41 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 baseos 92 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: perl-Benchmark noarch 1.23-481.1.el9_6 appstream 25 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 3.14.0-17.el9_7 appstream 1.0 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler x86_64 3.14.0-17.el9_7 crb 862 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 appstream 5.8 M
2026-04-01T09:51:45.575 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 45 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 163 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 241 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-5.el9 epel 173 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.10.0-5.el9 epel 290 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9_6 baseos 1.2 M
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.23-2.el9 appstream 205 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-isodate noarch 0.6.1-3.el9 epel 56 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2 noarch 2.11.3-8.el9_5 appstream 228 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 166 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-lxml x86_64 4.6.5-3.el9 appstream 1.2 M
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 32 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-msgpack x86_64 1.0.3-2.el9 epel 86 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9_7 appstream 5.8 M
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9_7 appstream 368 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 appstream 69 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9.0.1 baseos 103 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9_7 appstream 237 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9_7 appstream 132 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9_7 appstream 210 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 baseos 124 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9.0.1 baseos 150 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9_6 baseos 115 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 43 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9.0.1 appstream 44 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-6.el9_7.1 baseos 191 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmlsec x86_64 1.3.13-1.el9 epel 48 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 24.09.0-1.el9 appstream 221 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 65 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 appstream 299 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 baseos 180 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: xmlsec1 x86_64 1.2.29-13.el9 appstream 188 k
2026-04-01T09:51:45.576 INFO:teuthology.orchestra.run.vm00.stdout: xmlsec1-openssl x86_64 1.2.29-13.el9 appstream 89 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 63 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: zip x86_64 3.0-35.el9 baseos 263 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:Installing weak dependencies:
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 22 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-daemon x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 35 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: nvme-cli x86_64 2.13-1.el9 baseos 1.0 M
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: python3-influxdb noarch 5.3.1-1.el9 epel 139 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: python3-saml noarch 1.16.0-1.el9 epel 125 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 24.09.0-1.el9 appstream 36 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools x86_64 1:7.2-9.el9 baseos 551 k
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:=============================================================================================
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:Install 150 Packages
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:Upgrade 2 Packages
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:Total download size: 274 M
2026-04-01T09:51:45.577 INFO:teuthology.orchestra.run.vm00.stdout:Downloading Packages:
2026-04-01T09:51:45.849 INFO:teuthology.orchestra.run.vm03.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-04-01T09:51:45.849 INFO:teuthology.orchestra.run.vm03.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-04-01T09:51:45.872 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout:=============================================================================================
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout:=============================================================================================
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout:Installing:
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: bzip2 x86_64 1.0.8-10.el9_5 baseos 51 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 6.5 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 5.9 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 940 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 154 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 961 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 173 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 15 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 7.4 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 50 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 85 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 297 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 1.0 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 34 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 868 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 126 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 267 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 163 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath noarch 1.0.1-1.el9_7 appstream 43 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 317 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 304 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 99 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 91 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.9 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 180 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout:Upgrading:
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: librados2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 3.5 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: librbd1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.8 M
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout:Installing dependencies:
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options x86_64 1.75.0-13.el9_7 appstream 104 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: c-ares x86_64 1.19.1-2.el9_4 baseos 110 k
2026-04-01T09:51:45.877 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 43 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 2.3 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 289 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 5.0 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 17 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 17 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 25 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup x86_64 2.7.2-4.el9 baseos 310 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas x86_64 3.0.4-8.el9.0.1 appstream 30 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib x86_64 3.0.4-8.el9.0.1 appstream 3.0 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp x86_64 3.0.4-8.el9.0.1 appstream 15 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: fuse x86_64 2.9.9-17.el9 baseos 78 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 41 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-proxy2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 24 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 164 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libconfig x86_64 1.7.2-9.el9 baseos 71 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran x86_64 11.5.0-11.el9 baseos 794 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libnbd x86_64 1.20.3-4.el9 appstream 171 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 159 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath x86_64 11.5.0-11.el9 baseos 184 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 44 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 250 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 6.4 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 243 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: libxslt x86_64 1.1.34-13.el9_6 appstream 239 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: lmdb-libs x86_64 0.9.29-3.el9 baseos 60 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 282 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: lua x86_64 5.4.4-4.el9 appstream 187 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel x86_64 5.4.4-4.el9 crb 21 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: mailcap noarch 2.1.49-5.el9.0.2 baseos 32 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: openblas x86_64 0.3.29-1.el9 appstream 41 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: pciutils x86_64 3.7.0-7.el9 baseos 92 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: perl-Benchmark noarch 1.23-481.1.el9_6 appstream 25 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: protobuf x86_64 3.14.0-17.el9_7 appstream 1.0 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler x86_64 3.14.0-17.el9_7 crb 862 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel noarch 2.9.1-2.el9 appstream 5.8 M
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-04-01T09:51:45.878 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 45 k
2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 163 k
2026-04-01T09:51:45.879
INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 241 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot noarch 10.0.1-5.el9 epel 173 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy noarch 18.10.0-5.el9 epel 290 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography x86_64 36.0.1-5.el9_6 baseos 1.2 M 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel x86_64 3.9.23-2.el9 appstream 205 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-isodate noarch 0.6.1-3.el9 epel 56 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2 noarch 2.11.3-8.el9_5 appstream 228 k 2026-04-01T09:51:45.879 
INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 166 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-lxml x86_64 4.6.5-3.el9 appstream 1.2 M 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 32 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-msgpack x86_64 1.0.3-2.el9 epel 86 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy x86_64 1:1.23.5-2.el9_7 appstream 5.8 M 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9_7 appstream 368 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging noarch 20.9-5.el9 appstream 69 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply noarch 3.11-14.el9.0.1 baseos 103 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf noarch 3.14.0-17.el9_7 appstream 237 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1 noarch 0.4.8-7.el9_7 appstream 132 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9_7 appstream 210 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser noarch 2.20-6.el9 baseos 124 k 
2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing noarch 2.4.7-9.el9.0.1 baseos 150 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests noarch 2.25.1-10.el9_6 baseos 115 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 43 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml noarch 0.10.2-6.el9.0.1 appstream 44 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3 noarch 1.26.5-6.el9_7.1 baseos 191 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmlsec x86_64 1.3.13-1.el9 epel 48 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: qatlib x86_64 24.09.0-1.el9 appstream 221 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 65 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-04-01T09:51:45.879 
INFO:teuthology.orchestra.run.vm03.stdout: socat x86_64 1.7.4.1-8.el9 appstream 299 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: unzip x86_64 6.0-59.el9 baseos 180 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1 x86_64 1.2.29-13.el9 appstream 188 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-openssl x86_64 1.2.29-13.el9 appstream 89 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 63 k 2026-04-01T09:51:45.879 INFO:teuthology.orchestra.run.vm03.stdout: zip x86_64 3.0-35.el9 baseos 263 k 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout:Installing weak dependencies: 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents noarch 2:20.2.0-8.g0597158282e.el9.clyso ceph-noarch 22 k 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-daemon x86_64 2:20.2.0-8.g0597158282e.el9.clyso ceph 35 k 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli x86_64 2.13-1.el9 baseos 1.0 M 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb noarch 5.3.1-1.el9 epel 139 k 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: python3-saml noarch 1.16.0-1.el9 epel 125 k 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service x86_64 24.09.0-1.el9 appstream 36 k 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools x86_64 1:7.2-9.el9 baseos 551 k 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout:============================================================================================= 
2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout:Install 150 Packages 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout:Upgrade 2 Packages 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout:Total download size: 274 M 2026-04-01T09:51:45.880 INFO:teuthology.orchestra.run.vm03.stdout:Downloading Packages: 2026-04-01T09:51:46.488 INFO:teuthology.orchestra.run.vm07.stdout:(1/152): ceph-20.2.0-8.g0597158282e.el9.clyso.x 30 kB/s | 6.5 kB 00:00 2026-04-01T09:51:46.592 INFO:teuthology.orchestra.run.vm03.stdout:(1/152): ceph-20.2.0-8.g0597158282e.el9.clyso.x 69 kB/s | 6.5 kB 00:00 2026-04-01T09:51:46.703 INFO:teuthology.orchestra.run.vm00.stdout:(1/152): ceph-20.2.0-8.g0597158282e.el9.clyso.x 47 kB/s | 6.5 kB 00:00 2026-04-01T09:51:46.815 INFO:teuthology.orchestra.run.vm07.stdout:(2/152): ceph-fuse-20.2.0-8.g0597158282e.el9.cl 2.8 MB/s | 940 kB 00:00 2026-04-01T09:51:46.846 INFO:teuthology.orchestra.run.vm03.stdout:(2/152): ceph-fuse-20.2.0-8.g0597158282e.el9.cl 3.6 MB/s | 940 kB 00:00 2026-04-01T09:51:46.910 INFO:teuthology.orchestra.run.vm07.stdout:(3/152): ceph-immutable-object-cache-20.2.0-8.g 1.6 MB/s | 154 kB 00:00 2026-04-01T09:51:46.949 INFO:teuthology.orchestra.run.vm03.stdout:(3/152): ceph-immutable-object-cache-20.2.0-8.g 1.5 MB/s | 154 kB 00:00 2026-04-01T09:51:47.190 INFO:teuthology.orchestra.run.vm00.stdout:(2/152): ceph-fuse-20.2.0-8.g0597158282e.el9.cl 1.9 MB/s | 940 kB 00:00 2026-04-01T09:51:47.233 INFO:teuthology.orchestra.run.vm07.stdout:(4/152): ceph-common-20.2.0-8.g0597158282e.el9. 
25 MB/s | 24 MB 00:00 2026-04-01T09:51:47.287 INFO:teuthology.orchestra.run.vm00.stdout:(3/152): ceph-immutable-object-cache-20.2.0-8.g 1.6 MB/s | 154 kB 00:00 2026-04-01T09:51:47.292 INFO:teuthology.orchestra.run.vm07.stdout:(5/152): ceph-mds-20.2.0-8.g0597158282e.el9.cly 6.1 MB/s | 2.3 MB 00:00 2026-04-01T09:51:47.392 INFO:teuthology.orchestra.run.vm03.stdout:(4/152): ceph-mds-20.2.0-8.g0597158282e.el9.cly 5.3 MB/s | 2.3 MB 00:00 2026-04-01T09:51:47.404 INFO:teuthology.orchestra.run.vm07.stdout:(6/152): ceph-mgr-20.2.0-8.g0597158282e.el9.cly 5.5 MB/s | 961 kB 00:00 2026-04-01T09:51:47.488 INFO:teuthology.orchestra.run.vm03.stdout:(5/152): ceph-mgr-20.2.0-8.g0597158282e.el9.cly 9.9 MB/s | 961 kB 00:00 2026-04-01T09:51:47.586 INFO:teuthology.orchestra.run.vm00.stdout:(4/152): ceph-mds-20.2.0-8.g0597158282e.el9.cly 7.9 MB/s | 2.3 MB 00:00 2026-04-01T09:51:47.630 INFO:teuthology.orchestra.run.vm03.stdout:(6/152): ceph-common-20.2.0-8.g0597158282e.el9. 21 MB/s | 24 MB 00:01 2026-04-01T09:51:47.651 INFO:teuthology.orchestra.run.vm03.stdout:(7/152): ceph-base-20.2.0-8.g0597158282e.el9.cl 5.1 MB/s | 5.9 MB 00:01 2026-04-01T09:51:47.687 INFO:teuthology.orchestra.run.vm00.stdout:(5/152): ceph-mgr-20.2.0-8.g0597158282e.el9.cly 9.3 MB/s | 961 kB 00:00 2026-04-01T09:51:47.705 INFO:teuthology.orchestra.run.vm07.stdout:(7/152): ceph-mon-20.2.0-8.g0597158282e.el9.cly 12 MB/s | 5.0 MB 00:00 2026-04-01T09:51:47.926 INFO:teuthology.orchestra.run.vm00.stdout:(6/152): ceph-base-20.2.0-8.g0597158282e.el9.cl 4.3 MB/s | 5.9 MB 00:01 2026-04-01T09:51:48.028 INFO:teuthology.orchestra.run.vm07.stdout:(8/152): ceph-base-20.2.0-8.g0597158282e.el9.cl 3.3 MB/s | 5.9 MB 00:01 2026-04-01T09:51:48.475 INFO:teuthology.orchestra.run.vm07.stdout:(9/152): ceph-selinux-20.2.0-8.g0597158282e.el9 56 kB/s | 25 kB 00:00 2026-04-01T09:51:48.499 INFO:teuthology.orchestra.run.vm03.stdout:(8/152): ceph-mon-20.2.0-8.g0597158282e.el9.cly 5.0 MB/s | 5.0 MB 00:01 2026-04-01T09:51:48.520 
INFO:teuthology.orchestra.run.vm07.stdout:(10/152): ceph-osd-20.2.0-8.g0597158282e.el9.cl 15 MB/s | 17 MB 00:01 2026-04-01T09:51:48.657 INFO:teuthology.orchestra.run.vm00.stdout:(7/152): ceph-common-20.2.0-8.g0597158282e.el9. 11 MB/s | 24 MB 00:02 2026-04-01T09:51:48.669 INFO:teuthology.orchestra.run.vm03.stdout:(9/152): ceph-selinux-20.2.0-8.g0597158282e.el9 148 kB/s | 25 kB 00:00 2026-04-01T09:51:49.060 INFO:teuthology.orchestra.run.vm03.stdout:(10/152): ceph-osd-20.2.0-8.g0597158282e.el9.cl 12 MB/s | 17 MB 00:01 2026-04-01T09:51:49.100 INFO:teuthology.orchestra.run.vm00.stdout:(8/152): ceph-mon-20.2.0-8.g0597158282e.el9.cly 3.6 MB/s | 5.0 MB 00:01 2026-04-01T09:51:49.149 INFO:teuthology.orchestra.run.vm00.stdout:(9/152): ceph-selinux-20.2.0-8.g0597158282e.el9 518 kB/s | 25 kB 00:00 2026-04-01T09:51:49.236 INFO:teuthology.orchestra.run.vm07.stdout:(11/152): libcephfs-daemon-20.2.0-8.g0597158282 50 kB/s | 35 kB 00:00 2026-04-01T09:51:49.291 INFO:teuthology.orchestra.run.vm07.stdout:(12/152): libcephfs-devel-20.2.0-8.g0597158282e 620 kB/s | 34 kB 00:00 2026-04-01T09:51:49.347 INFO:teuthology.orchestra.run.vm07.stdout:(13/152): libcephfs-proxy2-20.2.0-8.g0597158282 443 kB/s | 24 kB 00:00 2026-04-01T09:51:49.688 INFO:teuthology.orchestra.run.vm07.stdout:(14/152): libcephfs2-20.2.0-8.g0597158282e.el9. 
2.5 MB/s | 868 kB 00:00 2026-04-01T09:51:49.770 INFO:teuthology.orchestra.run.vm03.stdout:(11/152): ceph-radosgw-20.2.0-8.g0597158282e.el 11 MB/s | 24 MB 00:02 2026-04-01T09:51:49.821 INFO:teuthology.orchestra.run.vm03.stdout:(12/152): libcephfs-devel-20.2.0-8.g0597158282e 675 kB/s | 34 kB 00:00 2026-04-01T09:51:49.823 INFO:teuthology.orchestra.run.vm07.stdout:(15/152): libcephsqlite-20.2.0-8.g0597158282e.e 1.2 MB/s | 164 kB 00:00 2026-04-01T09:51:49.890 INFO:teuthology.orchestra.run.vm03.stdout:(13/152): libcephfs-proxy2-20.2.0-8.g0597158282 353 kB/s | 24 kB 00:00 2026-04-01T09:51:49.891 INFO:teuthology.orchestra.run.vm07.stdout:(16/152): ceph-radosgw-20.2.0-8.g0597158282e.el 11 MB/s | 24 MB 00:02 2026-04-01T09:51:49.893 INFO:teuthology.orchestra.run.vm07.stdout:(17/152): librados-devel-20.2.0-8.g0597158282e. 1.8 MB/s | 126 kB 00:00 2026-04-01T09:51:49.987 INFO:teuthology.orchestra.run.vm07.stdout:(18/152): libradosstriper1-20.2.0-8.g0597158282 2.5 MB/s | 250 kB 00:00 2026-04-01T09:51:50.125 INFO:teuthology.orchestra.run.vm00.stdout:(10/152): ceph-osd-20.2.0-8.g0597158282e.el9.cl 7.7 MB/s | 17 MB 00:02 2026-04-01T09:51:50.126 INFO:teuthology.orchestra.run.vm03.stdout:(14/152): libcephfs2-20.2.0-8.g0597158282e.el9. 3.6 MB/s | 868 kB 00:00 2026-04-01T09:51:50.226 INFO:teuthology.orchestra.run.vm07.stdout:(19/152): python3-ceph-argparse-20.2.0-8.g05971 189 kB/s | 45 kB 00:00 2026-04-01T09:51:50.229 INFO:teuthology.orchestra.run.vm03.stdout:(15/152): libcephfs-daemon-20.2.0-8.g0597158282 30 kB/s | 35 kB 00:01 2026-04-01T09:51:50.310 INFO:teuthology.orchestra.run.vm07.stdout:(20/152): python3-ceph-common-20.2.0-8.g0597158 1.9 MB/s | 163 kB 00:00 2026-04-01T09:51:50.416 INFO:teuthology.orchestra.run.vm00.stdout:(11/152): libcephfs-daemon-20.2.0-8.g0597158282 122 kB/s | 35 kB 00:00 2026-04-01T09:51:50.473 INFO:teuthology.orchestra.run.vm07.stdout:(21/152): python3-cephfs-20.2.0-8.g0597158282e. 
1.0 MB/s | 163 kB 00:00 2026-04-01T09:51:50.493 INFO:teuthology.orchestra.run.vm03.stdout:(16/152): libcephsqlite-20.2.0-8.g0597158282e.e 447 kB/s | 164 kB 00:00 2026-04-01T09:51:50.519 INFO:teuthology.orchestra.run.vm07.stdout:(22/152): python3-rados-20.2.0-8.g0597158282e.e 6.7 MB/s | 317 kB 00:00 2026-04-01T09:51:50.539 INFO:teuthology.orchestra.run.vm03.stdout:(17/152): libradosstriper1-20.2.0-8.g0597158282 5.3 MB/s | 250 kB 00:00 2026-04-01T09:51:50.564 INFO:teuthology.orchestra.run.vm00.stdout:(12/152): libcephfs-devel-20.2.0-8.g0597158282e 233 kB/s | 34 kB 00:00 2026-04-01T09:51:50.593 INFO:teuthology.orchestra.run.vm07.stdout:(23/152): librgw2-20.2.0-8.g0597158282e.el9.cly 9.1 MB/s | 6.4 MB 00:00 2026-04-01T09:51:50.691 INFO:teuthology.orchestra.run.vm00.stdout:(13/152): libcephfs-proxy2-20.2.0-8.g0597158282 191 kB/s | 24 kB 00:00 2026-04-01T09:51:50.695 INFO:teuthology.orchestra.run.vm07.stdout:(24/152): python3-rbd-20.2.0-8.g0597158282e.el9 1.7 MB/s | 304 kB 00:00 2026-04-01T09:51:50.728 INFO:teuthology.orchestra.run.vm07.stdout:(25/152): python3-rgw-20.2.0-8.g0597158282e.el9 737 kB/s | 99 kB 00:00 2026-04-01T09:51:50.736 INFO:teuthology.orchestra.run.vm03.stdout:(18/152): librados-devel-20.2.0-8.g0597158282e. 248 kB/s | 126 kB 00:00 2026-04-01T09:51:50.774 INFO:teuthology.orchestra.run.vm00.stdout:(14/152): libcephfs2-20.2.0-8.g0597158282e.el9. 10 MB/s | 868 kB 00:00 2026-04-01T09:51:50.837 INFO:teuthology.orchestra.run.vm07.stdout:(26/152): rbd-fuse-20.2.0-8.g0597158282e.el9.cl 647 kB/s | 91 kB 00:00 2026-04-01T09:51:50.855 INFO:teuthology.orchestra.run.vm00.stdout:(15/152): libcephsqlite-20.2.0-8.g0597158282e.e 2.0 MB/s | 164 kB 00:00 2026-04-01T09:51:50.884 INFO:teuthology.orchestra.run.vm07.stdout:(27/152): rbd-nbd-20.2.0-8.g0597158282e.el9.cly 3.8 MB/s | 180 kB 00:00 2026-04-01T09:51:50.932 INFO:teuthology.orchestra.run.vm00.stdout:(16/152): librados-devel-20.2.0-8.g0597158282e. 
1.6 MB/s | 126 kB 00:00 2026-04-01T09:51:50.951 INFO:teuthology.orchestra.run.vm03.stdout:(19/152): librgw2-20.2.0-8.g0597158282e.el9.cly 15 MB/s | 6.4 MB 00:00 2026-04-01T09:51:50.957 INFO:teuthology.orchestra.run.vm03.stdout:(20/152): python3-ceph-argparse-20.2.0-8.g05971 204 kB/s | 45 kB 00:00 2026-04-01T09:51:50.971 INFO:teuthology.orchestra.run.vm00.stdout:(17/152): libradosstriper1-20.2.0-8.g0597158282 6.4 MB/s | 250 kB 00:00 2026-04-01T09:51:50.981 INFO:teuthology.orchestra.run.vm07.stdout:(28/152): ceph-grafana-dashboards-20.2.0-8.g059 445 kB/s | 43 kB 00:00 2026-04-01T09:51:50.991 INFO:teuthology.orchestra.run.vm03.stdout:(21/152): python3-ceph-common-20.2.0-8.g0597158 3.9 MB/s | 163 kB 00:00 2026-04-01T09:51:51.027 INFO:teuthology.orchestra.run.vm07.stdout:(29/152): ceph-mgr-cephadm-20.2.0-8.g0597158282 3.7 MB/s | 173 kB 00:00 2026-04-01T09:51:51.056 INFO:teuthology.orchestra.run.vm00.stdout:(18/152): ceph-radosgw-20.2.0-8.g0597158282e.el 9.9 MB/s | 24 MB 00:02 2026-04-01T09:51:51.084 INFO:teuthology.orchestra.run.vm03.stdout:(22/152): python3-cephfs-20.2.0-8.g0597158282e. 1.3 MB/s | 163 kB 00:00 2026-04-01T09:51:51.094 INFO:teuthology.orchestra.run.vm03.stdout:(23/152): python3-rados-20.2.0-8.g0597158282e.e 3.0 MB/s | 317 kB 00:00 2026-04-01T09:51:51.099 INFO:teuthology.orchestra.run.vm00.stdout:(19/152): python3-ceph-argparse-20.2.0-8.g05971 1.0 MB/s | 45 kB 00:00 2026-04-01T09:51:51.128 INFO:teuthology.orchestra.run.vm03.stdout:(24/152): python3-rgw-20.2.0-8.g0597158282e.el9 2.9 MB/s | 99 kB 00:00 2026-04-01T09:51:51.147 INFO:teuthology.orchestra.run.vm00.stdout:(20/152): python3-ceph-common-20.2.0-8.g0597158 3.4 MB/s | 163 kB 00:00 2026-04-01T09:51:51.175 INFO:teuthology.orchestra.run.vm03.stdout:(25/152): rbd-fuse-20.2.0-8.g0597158282e.el9.cl 1.9 MB/s | 91 kB 00:00 2026-04-01T09:51:51.204 INFO:teuthology.orchestra.run.vm07.stdout:(30/152): rbd-mirror-20.2.0-8.g0597158282e.el9. 
6.1 MB/s | 2.9 MB 00:00 2026-04-01T09:51:51.230 INFO:teuthology.orchestra.run.vm00.stdout:(21/152): python3-cephfs-20.2.0-8.g0597158282e. 1.9 MB/s | 163 kB 00:00 2026-04-01T09:51:51.230 INFO:teuthology.orchestra.run.vm03.stdout:(26/152): python3-rbd-20.2.0-8.g0597158282e.el9 2.0 MB/s | 304 kB 00:00 2026-04-01T09:51:51.288 INFO:teuthology.orchestra.run.vm03.stdout:(27/152): rbd-nbd-20.2.0-8.g0597158282e.el9.cly 3.0 MB/s | 180 kB 00:00 2026-04-01T09:51:51.317 INFO:teuthology.orchestra.run.vm00.stdout:(22/152): python3-rados-20.2.0-8.g0597158282e.e 3.6 MB/s | 317 kB 00:00 2026-04-01T09:51:51.323 INFO:teuthology.orchestra.run.vm03.stdout:(28/152): ceph-grafana-dashboards-20.2.0-8.g059 1.2 MB/s | 43 kB 00:00 2026-04-01T09:51:51.387 INFO:teuthology.orchestra.run.vm00.stdout:(23/152): python3-rbd-20.2.0-8.g0597158282e.el9 4.2 MB/s | 304 kB 00:00 2026-04-01T09:51:51.441 INFO:teuthology.orchestra.run.vm03.stdout:(29/152): ceph-mgr-cephadm-20.2.0-8.g0597158282 1.4 MB/s | 173 kB 00:00 2026-04-01T09:51:51.447 INFO:teuthology.orchestra.run.vm00.stdout:(24/152): python3-rgw-20.2.0-8.g0597158282e.el9 1.6 MB/s | 99 kB 00:00 2026-04-01T09:51:51.500 INFO:teuthology.orchestra.run.vm03.stdout:(30/152): rbd-mirror-20.2.0-8.g0597158282e.el9. 
9.0 MB/s | 2.9 MB 00:00 2026-04-01T09:51:51.509 INFO:teuthology.orchestra.run.vm00.stdout:(25/152): rbd-fuse-20.2.0-8.g0597158282e.el9.cl 1.4 MB/s | 91 kB 00:00 2026-04-01T09:51:51.641 INFO:teuthology.orchestra.run.vm00.stdout:(26/152): librgw2-20.2.0-8.g0597158282e.el9.cly 9.5 MB/s | 6.4 MB 00:00 2026-04-01T09:51:51.737 INFO:teuthology.orchestra.run.vm07.stdout:(31/152): ceph-mgr-dashboard-20.2.0-8.g05971582 21 MB/s | 15 MB 00:00 2026-04-01T09:51:51.836 INFO:teuthology.orchestra.run.vm07.stdout:(32/152): ceph-mgr-k8sevents-20.2.0-8.g05971582 222 kB/s | 22 kB 00:00 2026-04-01T09:51:52.006 INFO:teuthology.orchestra.run.vm07.stdout:(33/152): ceph-mgr-modules-core-20.2.0-8.g05971 1.7 MB/s | 289 kB 00:00 2026-04-01T09:51:52.040 INFO:teuthology.orchestra.run.vm03.stdout:(31/152): ceph-mgr-dashboard-20.2.0-8.g05971582 25 MB/s | 15 MB 00:00 2026-04-01T09:51:52.077 INFO:teuthology.orchestra.run.vm07.stdout:(34/152): ceph-mgr-rook-20.2.0-8.g0597158282e.e 711 kB/s | 50 kB 00:00 2026-04-01T09:51:52.126 INFO:teuthology.orchestra.run.vm07.stdout:(35/152): ceph-prometheus-alerts-20.2.0-8.g0597 353 kB/s | 17 kB 00:00 2026-04-01T09:51:52.148 INFO:teuthology.orchestra.run.vm03.stdout:(32/152): ceph-mgr-k8sevents-20.2.0-8.g05971582 205 kB/s | 22 kB 00:00 2026-04-01T09:51:52.160 INFO:teuthology.orchestra.run.vm07.stdout:(36/152): ceph-mgr-diskprediction-local-20.2.0- 7.8 MB/s | 7.4 MB 00:00 2026-04-01T09:51:52.184 INFO:teuthology.orchestra.run.vm07.stdout:(37/152): ceph-volume-20.2.0-8.g0597158282e.el9 5.0 MB/s | 297 kB 00:00 2026-04-01T09:51:52.204 INFO:teuthology.orchestra.run.vm07.stdout:(38/152): abseil-cpp-20211102.0-4.el9.x86_64.rp 28 MB/s | 551 kB 00:00 2026-04-01T09:51:52.213 INFO:teuthology.orchestra.run.vm07.stdout:(39/152): gperftools-libs-2.9.1-3.el9.x86_64.rp 38 MB/s | 308 kB 00:00 2026-04-01T09:51:52.217 INFO:teuthology.orchestra.run.vm07.stdout:(40/152): grpc-data-1.46.7-10.el9.noarch.rpm 4.5 MB/s | 19 kB 00:00 2026-04-01T09:51:52.218 
INFO:teuthology.orchestra.run.vm00.stdout:(27/152): rbd-mirror-20.2.0-8.g0597158282e.el9. 4.1 MB/s | 2.9 MB 00:00 2026-04-01T09:51:52.313 INFO:teuthology.orchestra.run.vm00.stdout:(28/152): ceph-grafana-dashboards-20.2.0-8.g059 458 kB/s | 43 kB 00:00 2026-04-01T09:51:52.329 INFO:teuthology.orchestra.run.vm07.stdout:(41/152): cephadm-20.2.0-8.g0597158282e.el9.cly 5.8 MB/s | 1.0 MB 00:00 2026-04-01T09:51:52.341 INFO:teuthology.orchestra.run.vm07.stdout:(42/152): libarrow-doc-9.0.0-15.el9.noarch.rpm 2.1 MB/s | 25 kB 00:00 2026-04-01T09:51:52.355 INFO:teuthology.orchestra.run.vm07.stdout:(43/152): liboath-2.6.12-1.el9.x86_64.rpm 3.5 MB/s | 49 kB 00:00 2026-04-01T09:51:52.363 INFO:teuthology.orchestra.run.vm07.stdout:(44/152): libunwind-1.6.2-1.el9.x86_64.rpm 9.3 MB/s | 67 kB 00:00 2026-04-01T09:51:52.375 INFO:teuthology.orchestra.run.vm00.stdout:(29/152): ceph-mgr-cephadm-20.2.0-8.g0597158282 2.7 MB/s | 173 kB 00:00 2026-04-01T09:51:52.377 INFO:teuthology.orchestra.run.vm07.stdout:(45/152): luarocks-3.9.2-5.el9.noarch.rpm 10 MB/s | 151 kB 00:00 2026-04-01T09:51:52.406 INFO:teuthology.orchestra.run.vm07.stdout:(46/152): parquet-libs-9.0.0-15.el9.x86_64.rpm 30 MB/s | 838 kB 00:00 2026-04-01T09:51:52.420 INFO:teuthology.orchestra.run.vm07.stdout:(47/152): libarrow-9.0.0-15.el9.x86_64.rpm 22 MB/s | 4.4 MB 00:00 2026-04-01T09:51:52.431 INFO:teuthology.orchestra.run.vm07.stdout:(48/152): python3-asyncssh-2.13.2-5.el9.noarch. 
22 MB/s | 548 kB 00:00 2026-04-01T09:51:52.433 INFO:teuthology.orchestra.run.vm07.stdout:(49/152): python3-autocommand-2.2.2-8.el9.noarc 2.3 MB/s | 29 kB 00:00 2026-04-01T09:51:52.434 INFO:teuthology.orchestra.run.vm03.stdout:(33/152): ceph-mgr-modules-core-20.2.0-8.g05971 1.0 MB/s | 289 kB 00:00 2026-04-01T09:51:52.434 INFO:teuthology.orchestra.run.vm07.stdout:(50/152): python3-backports-tarfile-1.2.0-1.el9 17 MB/s | 60 kB 00:00 2026-04-01T09:51:52.436 INFO:teuthology.orchestra.run.vm07.stdout:(51/152): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 14 MB/s | 43 kB 00:00 2026-04-01T09:51:52.450 INFO:teuthology.orchestra.run.vm07.stdout:(52/152): python3-cachetools-4.2.4-1.el9.noarch 2.0 MB/s | 32 kB 00:00 2026-04-01T09:51:52.451 INFO:teuthology.orchestra.run.vm07.stdout:(53/152): python3-certifi-2023.05.07-4.el9.noar 987 kB/s | 14 kB 00:00 2026-04-01T09:51:52.467 INFO:teuthology.orchestra.run.vm07.stdout:(54/152): python3-cherrypy-18.10.0-5.el9.noarch 17 MB/s | 290 kB 00:00 2026-04-01T09:51:52.471 INFO:teuthology.orchestra.run.vm07.stdout:(55/152): python3-cheroot-10.0.1-5.el9.noarch.r 8.1 MB/s | 173 kB 00:00 2026-04-01T09:51:52.484 INFO:teuthology.orchestra.run.vm07.stdout:(56/152): python3-google-auth-2.45.0-1.el9.noar 15 MB/s | 254 kB 00:00 2026-04-01T09:51:52.495 INFO:teuthology.orchestra.run.vm03.stdout:(34/152): ceph-mgr-rook-20.2.0-8.g0597158282e.e 837 kB/s | 50 kB 00:00 2026-04-01T09:51:52.498 INFO:teuthology.orchestra.run.vm07.stdout:(57/152): python3-grpcio-tools-1.46.7-10.el9.x8 10 MB/s | 144 kB 00:00 2026-04-01T09:51:52.509 INFO:teuthology.orchestra.run.vm07.stdout:(58/152): python3-influxdb-5.3.1-1.el9.noarch.r 12 MB/s | 139 kB 00:00 2026-04-01T09:51:52.526 INFO:teuthology.orchestra.run.vm07.stdout:(59/152): python3-isodate-0.6.1-3.el9.noarch.rp 3.2 MB/s | 56 kB 00:00 2026-04-01T09:51:52.528 INFO:teuthology.orchestra.run.vm00.stdout:(30/152): rbd-nbd-20.2.0-8.g0597158282e.el9.cly 202 kB/s | 180 kB 00:00 2026-04-01T09:51:52.532 
INFO:teuthology.orchestra.run.vm07.stdout:(60/152): python3-jaraco-8.2.1-3.el9.noarch.rpm 1.8 MB/s | 11 kB 00:00 2026-04-01T09:51:52.536 INFO:teuthology.orchestra.run.vm03.stdout:(35/152): ceph-prometheus-alerts-20.2.0-8.g0597 415 kB/s | 17 kB 00:00 2026-04-01T09:51:52.537 INFO:teuthology.orchestra.run.vm07.stdout:(61/152): python3-jaraco-classes-3.2.1-5.el9.no 4.1 MB/s | 18 kB 00:00 2026-04-01T09:51:52.541 INFO:teuthology.orchestra.run.vm07.stdout:(62/152): python3-jaraco-collections-3.0.0-8.el 5.8 MB/s | 23 kB 00:00 2026-04-01T09:51:52.548 INFO:teuthology.orchestra.run.vm07.stdout:(63/152): python3-grpcio-1.46.7-10.el9.x86_64.r 27 MB/s | 2.0 MB 00:00 2026-04-01T09:51:52.549 INFO:teuthology.orchestra.run.vm07.stdout:(64/152): python3-jaraco-context-6.0.1-3.el9.no 2.6 MB/s | 20 kB 00:00 2026-04-01T09:51:52.551 INFO:teuthology.orchestra.run.vm07.stdout:(65/152): python3-jaraco-functools-3.5.0-2.el9. 7.0 MB/s | 19 kB 00:00 2026-04-01T09:51:52.562 INFO:teuthology.orchestra.run.vm07.stdout:(66/152): python3-jaraco-text-4.0.0-2.el9.noarc 2.0 MB/s | 26 kB 00:00 2026-04-01T09:51:52.568 INFO:teuthology.orchestra.run.vm07.stdout:(67/152): python3-more-itertools-8.12.0-2.el9.n 14 MB/s | 79 kB 00:00 2026-04-01T09:51:52.573 INFO:teuthology.orchestra.run.vm07.stdout:(68/152): python3-msgpack-1.0.3-2.el9.x86_64.rp 20 MB/s | 86 kB 00:00 2026-04-01T09:51:52.576 INFO:teuthology.orchestra.run.vm07.stdout:(69/152): python3-natsort-7.1.1-5.el9.noarch.rp 16 MB/s | 58 kB 00:00 2026-04-01T09:51:52.580 INFO:teuthology.orchestra.run.vm07.stdout:(70/152): python3-portend-3.1.0-2.el9.noarch.rp 4.7 MB/s | 16 kB 00:00 2026-04-01T09:51:52.587 INFO:teuthology.orchestra.run.vm07.stdout:(71/152): python3-pyOpenSSL-21.0.0-1.el9.noarch 13 MB/s | 90 kB 00:00 2026-04-01T09:51:52.595 INFO:teuthology.orchestra.run.vm07.stdout:(72/152): python3-repoze-lru-0.7-16.el9.noarch. 
4.3 MB/s | 31 kB 00:00 2026-04-01T09:51:52.599 INFO:teuthology.orchestra.run.vm07.stdout:(73/152): python3-kubernetes-26.1.0-3.el9.noarc 21 MB/s | 1.0 MB 00:00 2026-04-01T09:51:52.604 INFO:teuthology.orchestra.run.vm07.stdout:(74/152): python3-routes-2.5.1-5.el9.noarch.rpm 20 MB/s | 188 kB 00:00 2026-04-01T09:51:52.606 INFO:teuthology.orchestra.run.vm07.stdout:(75/152): python3-rsa-4.9-2.el9.noarch.rpm 8.6 MB/s | 59 kB 00:00 2026-04-01T09:51:52.611 INFO:teuthology.orchestra.run.vm07.stdout:(76/152): python3-saml-1.16.0-1.el9.noarch.rpm 20 MB/s | 125 kB 00:00 2026-04-01T09:51:52.612 INFO:teuthology.orchestra.run.vm07.stdout:(77/152): python3-tempora-5.0.0-2.el9.noarch.rp 5.7 MB/s | 36 kB 00:00 2026-04-01T09:51:52.617 INFO:teuthology.orchestra.run.vm07.stdout:(78/152): python3-typing-extensions-4.15.0-1.el 16 MB/s | 86 kB 00:00 2026-04-01T09:51:52.619 INFO:teuthology.orchestra.run.vm07.stdout:(79/152): python3-websocket-client-1.2.3-2.el9. 14 MB/s | 90 kB 00:00 2026-04-01T09:51:52.622 INFO:teuthology.orchestra.run.vm07.stdout:(80/152): python3-xmlsec-1.3.13-1.el9.x86_64.rp 8.9 MB/s | 48 kB 00:00 2026-04-01T09:51:52.624 INFO:teuthology.orchestra.run.vm07.stdout:(81/152): python3-xmltodict-0.12.0-15.el9.noarc 4.3 MB/s | 22 kB 00:00 2026-04-01T09:51:52.628 INFO:teuthology.orchestra.run.vm07.stdout:(82/152): python3-zc-lockfile-2.0-10.el9.noarch 3.8 MB/s | 20 kB 00:00 2026-04-01T09:51:52.634 INFO:teuthology.orchestra.run.vm07.stdout:(83/152): s3cmd-2.4.0-1.el9.noarch.rpm 33 MB/s | 206 kB 00:00 2026-04-01T09:51:52.636 INFO:teuthology.orchestra.run.vm07.stdout:(84/152): re2-20211101-20.el9.x86_64.rpm 16 MB/s | 191 kB 00:00 2026-04-01T09:51:52.686 INFO:teuthology.orchestra.run.vm07.stdout:(85/152): thrift-0.15.0-4.el9.x86_64.rpm 31 MB/s | 1.6 MB 00:00 2026-04-01T09:51:52.776 INFO:teuthology.orchestra.run.vm07.stdout:(86/152): bzip2-1.0.8-10.el9_5.x86_64.rpm 369 kB/s | 51 kB 00:00 2026-04-01T09:51:52.814 INFO:teuthology.orchestra.run.vm03.stdout:(36/152): 
ceph-volume-20.2.0-8.g0597158282e.el9 1.0 MB/s | 297 kB 00:00 2026-04-01T09:51:52.838 INFO:teuthology.orchestra.run.vm03.stdout:(37/152): ceph-mgr-diskprediction-local-20.2.0- 5.5 MB/s | 7.4 MB 00:01 2026-04-01T09:51:52.844 INFO:teuthology.orchestra.run.vm07.stdout:(87/152): c-ares-1.19.1-2.el9_4.x86_64.rpm 696 kB/s | 110 kB 00:00 2026-04-01T09:51:52.853 INFO:teuthology.orchestra.run.vm03.stdout:(38/152): abseil-cpp-20211102.0-4.el9.x86_64.rp 36 MB/s | 551 kB 00:00 2026-04-01T09:51:52.855 INFO:teuthology.orchestra.run.vm07.stdout:(88/152): cryptsetup-2.7.2-4.el9.x86_64.rpm 3.8 MB/s | 310 kB 00:00 2026-04-01T09:51:52.860 INFO:teuthology.orchestra.run.vm03.stdout:(39/152): gperftools-libs-2.9.1-3.el9.x86_64.rp 46 MB/s | 308 kB 00:00 2026-04-01T09:51:52.863 INFO:teuthology.orchestra.run.vm03.stdout:(40/152): grpc-data-1.46.7-10.el9.noarch.rpm 5.2 MB/s | 19 kB 00:00 2026-04-01T09:51:52.872 INFO:teuthology.orchestra.run.vm07.stdout:(89/152): fuse-2.9.9-17.el9.x86_64.rpm 2.7 MB/s | 78 kB 00:00 2026-04-01T09:51:52.882 INFO:teuthology.orchestra.run.vm07.stdout:(90/152): ledmon-libs-1.1.0-3.el9.x86_64.rpm 1.5 MB/s | 41 kB 00:00 2026-04-01T09:51:52.902 INFO:teuthology.orchestra.run.vm07.stdout:(91/152): libconfig-1.7.2-9.el9.x86_64.rpm 2.3 MB/s | 71 kB 00:00 2026-04-01T09:51:52.963 INFO:teuthology.orchestra.run.vm07.stdout:(92/152): libquadmath-11.5.0-11.el9.x86_64.rpm 3.0 MB/s | 184 kB 00:00 2026-04-01T09:51:52.992 INFO:teuthology.orchestra.run.vm07.stdout:(93/152): lmdb-libs-0.9.29-3.el9.x86_64.rpm 2.0 MB/s | 60 kB 00:00 2026-04-01T09:51:53.019 INFO:teuthology.orchestra.run.vm07.stdout:(94/152): mailcap-2.1.49-5.el9.0.2.noarch.rpm 1.2 MB/s | 32 kB 00:00 2026-04-01T09:51:53.019 INFO:teuthology.orchestra.run.vm03.stdout:(41/152): libarrow-9.0.0-15.el9.x86_64.rpm 28 MB/s | 4.4 MB 00:00 2026-04-01T09:51:53.022 INFO:teuthology.orchestra.run.vm03.stdout:(42/152): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.8 MB/s | 25 kB 00:00 2026-04-01T09:51:53.025 
INFO:teuthology.orchestra.run.vm03.stdout:(43/152): liboath-2.6.12-1.el9.x86_64.rpm 16 MB/s | 49 kB 00:00 2026-04-01T09:51:53.029 INFO:teuthology.orchestra.run.vm03.stdout:(44/152): libunwind-1.6.2-1.el9.x86_64.rpm 21 MB/s | 67 kB 00:00 2026-04-01T09:51:53.033 INFO:teuthology.orchestra.run.vm07.stdout:(95/152): libgfortran-11.5.0-11.el9.x86_64.rpm 5.1 MB/s | 794 kB 00:00 2026-04-01T09:51:53.033 INFO:teuthology.orchestra.run.vm03.stdout:(45/152): luarocks-3.9.2-5.el9.noarch.rpm 33 MB/s | 151 kB 00:00 2026-04-01T09:51:53.049 INFO:teuthology.orchestra.run.vm03.stdout:(46/152): parquet-libs-9.0.0-15.el9.x86_64.rpm 54 MB/s | 838 kB 00:00 2026-04-01T09:51:53.059 INFO:teuthology.orchestra.run.vm03.stdout:(47/152): python3-asyncssh-2.13.2-5.el9.noarch. 58 MB/s | 548 kB 00:00 2026-04-01T09:51:53.061 INFO:teuthology.orchestra.run.vm03.stdout:(48/152): python3-autocommand-2.2.2-8.el9.noarc 12 MB/s | 29 kB 00:00 2026-04-01T09:51:53.063 INFO:teuthology.orchestra.run.vm07.stdout:(96/152): pciutils-3.7.0-7.el9.x86_64.rpm 3.1 MB/s | 92 kB 00:00 2026-04-01T09:51:53.064 INFO:teuthology.orchestra.run.vm03.stdout:(49/152): python3-backports-tarfile-1.2.0-1.el9 22 MB/s | 60 kB 00:00 2026-04-01T09:51:53.067 INFO:teuthology.orchestra.run.vm03.stdout:(50/152): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 17 MB/s | 43 kB 00:00 2026-04-01T09:51:53.070 INFO:teuthology.orchestra.run.vm03.stdout:(51/152): python3-cachetools-4.2.4-1.el9.noarch 11 MB/s | 32 kB 00:00 2026-04-01T09:51:53.072 INFO:teuthology.orchestra.run.vm03.stdout:(52/152): python3-certifi-2023.05.07-4.el9.noar 6.2 MB/s | 14 kB 00:00 2026-04-01T09:51:53.077 INFO:teuthology.orchestra.run.vm03.stdout:(53/152): python3-cheroot-10.0.1-5.el9.noarch.r 39 MB/s | 173 kB 00:00 2026-04-01T09:51:53.083 INFO:teuthology.orchestra.run.vm03.stdout:(54/152): python3-cherrypy-18.10.0-5.el9.noarch 48 MB/s | 290 kB 00:00 2026-04-01T09:51:53.089 INFO:teuthology.orchestra.run.vm03.stdout:(55/152): python3-google-auth-2.45.0-1.el9.noar 41 MB/s | 254 kB 
00:00 2026-04-01T09:51:53.118 INFO:teuthology.orchestra.run.vm07.stdout:(97/152): python3-cffi-1.14.5-5.el9.x86_64.rpm 4.3 MB/s | 241 kB 00:00 2026-04-01T09:51:53.124 INFO:teuthology.orchestra.run.vm03.stdout:(56/152): python3-grpcio-1.46.7-10.el9.x86_64.r 59 MB/s | 2.0 MB 00:00 2026-04-01T09:51:53.130 INFO:teuthology.orchestra.run.vm03.stdout:(57/152): python3-grpcio-tools-1.46.7-10.el9.x8 23 MB/s | 144 kB 00:00 2026-04-01T09:51:53.135 INFO:teuthology.orchestra.run.vm07.stdout:(98/152): nvme-cli-2.13-1.el9.x86_64.rpm 8.5 MB/s | 1.0 MB 00:00 2026-04-01T09:51:53.138 INFO:teuthology.orchestra.run.vm03.stdout:(58/152): python3-influxdb-5.3.1-1.el9.noarch.r 18 MB/s | 139 kB 00:00 2026-04-01T09:51:53.141 INFO:teuthology.orchestra.run.vm03.stdout:(59/152): python3-isodate-0.6.1-3.el9.noarch.rp 16 MB/s | 56 kB 00:00 2026-04-01T09:51:53.144 INFO:teuthology.orchestra.run.vm03.stdout:(60/152): python3-jaraco-8.2.1-3.el9.noarch.rpm 4.7 MB/s | 11 kB 00:00 2026-04-01T09:51:53.147 INFO:teuthology.orchestra.run.vm03.stdout:(61/152): python3-jaraco-classes-3.2.1-5.el9.no 5.4 MB/s | 18 kB 00:00 2026-04-01T09:51:53.150 INFO:teuthology.orchestra.run.vm03.stdout:(62/152): python3-jaraco-collections-3.0.0-8.el 7.5 MB/s | 23 kB 00:00 2026-04-01T09:51:53.153 INFO:teuthology.orchestra.run.vm03.stdout:(63/152): python3-jaraco-context-6.0.1-3.el9.no 8.4 MB/s | 20 kB 00:00 2026-04-01T09:51:53.155 INFO:teuthology.orchestra.run.vm03.stdout:(64/152): python3-jaraco-functools-3.5.0-2.el9. 
7.7 MB/s | 19 kB 00:00 2026-04-01T09:51:53.158 INFO:teuthology.orchestra.run.vm03.stdout:(65/152): python3-jaraco-text-4.0.0-2.el9.noarc 10 MB/s | 26 kB 00:00 2026-04-01T09:51:53.165 INFO:teuthology.orchestra.run.vm07.stdout:(99/152): python3-ply-3.11-14.el9.0.1.noarch.rp 3.4 MB/s | 103 kB 00:00 2026-04-01T09:51:53.187 INFO:teuthology.orchestra.run.vm03.stdout:(66/152): python3-kubernetes-26.1.0-3.el9.noarc 36 MB/s | 1.0 MB 00:00 2026-04-01T09:51:53.191 INFO:teuthology.orchestra.run.vm03.stdout:(67/152): python3-more-itertools-8.12.0-2.el9.n 22 MB/s | 79 kB 00:00 2026-04-01T09:51:53.194 INFO:teuthology.orchestra.run.vm07.stdout:(100/152): python3-pycparser-2.20-6.el9.noarch. 4.3 MB/s | 124 kB 00:00 2026-04-01T09:51:53.195 INFO:teuthology.orchestra.run.vm03.stdout:(68/152): python3-msgpack-1.0.3-2.el9.x86_64.rp 21 MB/s | 86 kB 00:00 2026-04-01T09:51:53.204 INFO:teuthology.orchestra.run.vm00.stdout:(31/152): ceph-mgr-diskprediction-local-20.2.0- 11 MB/s | 7.4 MB 00:00 2026-04-01T09:51:53.205 INFO:teuthology.orchestra.run.vm03.stdout:(69/152): python3-natsort-7.1.1-5.el9.noarch.rp 6.4 MB/s | 58 kB 00:00 2026-04-01T09:51:53.207 INFO:teuthology.orchestra.run.vm03.stdout:(70/152): python3-portend-3.1.0-2.el9.noarch.rp 6.6 MB/s | 16 kB 00:00 2026-04-01T09:51:53.210 INFO:teuthology.orchestra.run.vm03.stdout:(71/152): cephadm-20.2.0-8.g0597158282e.el9.cly 2.5 MB/s | 1.0 MB 00:00 2026-04-01T09:51:53.212 INFO:teuthology.orchestra.run.vm03.stdout:(72/152): python3-pyOpenSSL-21.0.0-1.el9.noarch 18 MB/s | 90 kB 00:00 2026-04-01T09:51:53.217 INFO:teuthology.orchestra.run.vm03.stdout:(73/152): python3-repoze-lru-0.7-16.el9.noarch. 
4.9 MB/s | 31 kB 00:00 2026-04-01T09:51:53.219 INFO:teuthology.orchestra.run.vm03.stdout:(74/152): python3-routes-2.5.1-5.el9.noarch.rpm 27 MB/s | 188 kB 00:00 2026-04-01T09:51:53.221 INFO:teuthology.orchestra.run.vm03.stdout:(75/152): python3-rsa-4.9-2.el9.noarch.rpm 14 MB/s | 59 kB 00:00 2026-04-01T09:51:53.223 INFO:teuthology.orchestra.run.vm07.stdout:(101/152): python3-pyparsing-2.4.7-9.el9.0.1.no 5.0 MB/s | 150 kB 00:00 2026-04-01T09:51:53.224 INFO:teuthology.orchestra.run.vm03.stdout:(76/152): python3-saml-1.16.0-1.el9.noarch.rpm 25 MB/s | 125 kB 00:00 2026-04-01T09:51:53.225 INFO:teuthology.orchestra.run.vm03.stdout:(77/152): python3-tempora-5.0.0-2.el9.noarch.rp 8.7 MB/s | 36 kB 00:00 2026-04-01T09:51:53.230 INFO:teuthology.orchestra.run.vm03.stdout:(78/152): python3-websocket-client-1.2.3-2.el9. 23 MB/s | 90 kB 00:00 2026-04-01T09:51:53.231 INFO:teuthology.orchestra.run.vm03.stdout:(79/152): python3-typing-extensions-4.15.0-1.el 13 MB/s | 86 kB 00:00 2026-04-01T09:51:53.233 INFO:teuthology.orchestra.run.vm03.stdout:(80/152): python3-xmlsec-1.3.13-1.el9.x86_64.rp 16 MB/s | 48 kB 00:00 2026-04-01T09:51:53.234 INFO:teuthology.orchestra.run.vm03.stdout:(81/152): python3-xmltodict-0.12.0-15.el9.noarc 9.1 MB/s | 22 kB 00:00 2026-04-01T09:51:53.235 INFO:teuthology.orchestra.run.vm03.stdout:(82/152): python3-zc-lockfile-2.0-10.el9.noarch 8.0 MB/s | 20 kB 00:00 2026-04-01T09:51:53.236 INFO:teuthology.orchestra.run.vm00.stdout:(32/152): ceph-mgr-k8sevents-20.2.0-8.g05971582 710 kB/s | 22 kB 00:00 2026-04-01T09:51:53.242 INFO:teuthology.orchestra.run.vm03.stdout:(83/152): s3cmd-2.4.0-1.el9.noarch.rpm 34 MB/s | 206 kB 00:00 2026-04-01T09:51:53.244 INFO:teuthology.orchestra.run.vm03.stdout:(84/152): re2-20211101-20.el9.x86_64.rpm 20 MB/s | 191 kB 00:00 2026-04-01T09:51:53.254 INFO:teuthology.orchestra.run.vm07.stdout:(102/152): python3-requests-2.25.1-10.el9_6.noa 3.7 MB/s | 115 kB 00:00 2026-04-01T09:51:53.273 INFO:teuthology.orchestra.run.vm03.stdout:(85/152): 
thrift-0.15.0-4.el9.x86_64.rpm 51 MB/s | 1.6 MB 00:00 2026-04-01T09:51:53.279 INFO:teuthology.orchestra.run.vm07.stdout:(103/152): python3-cryptography-36.0.1-5.el9_6. 7.2 MB/s | 1.2 MB 00:00 2026-04-01T09:51:53.283 INFO:teuthology.orchestra.run.vm07.stdout:(104/152): python3-urllib3-1.26.5-6.el9_7.1.noa 6.4 MB/s | 191 kB 00:00 2026-04-01T09:51:53.350 INFO:teuthology.orchestra.run.vm00.stdout:(33/152): ceph-mgr-modules-core-20.2.0-8.g05971 2.5 MB/s | 289 kB 00:00 2026-04-01T09:51:53.350 INFO:teuthology.orchestra.run.vm07.stdout:(105/152): unzip-6.0-59.el9.x86_64.rpm 2.6 MB/s | 180 kB 00:00 2026-04-01T09:51:53.358 INFO:teuthology.orchestra.run.vm07.stdout:(106/152): smartmontools-7.2-9.el9.x86_64.rpm 6.8 MB/s | 551 kB 00:00 2026-04-01T09:51:53.387 INFO:teuthology.orchestra.run.vm07.stdout:(107/152): boost-program-options-1.75.0-13.el9_ 3.6 MB/s | 104 kB 00:00 2026-04-01T09:51:53.389 INFO:teuthology.orchestra.run.vm07.stdout:(108/152): zip-3.0-35.el9.x86_64.rpm 6.5 MB/s | 263 kB 00:00 2026-04-01T09:51:53.413 INFO:teuthology.orchestra.run.vm07.stdout:(109/152): flexiblas-3.0.4-8.el9.0.1.x86_64.rpm 1.1 MB/s | 30 kB 00:00 2026-04-01T09:51:53.417 INFO:teuthology.orchestra.run.vm03.stdout:(86/152): bzip2-1.0.8-10.el9_5.x86_64.rpm 297 kB/s | 51 kB 00:00 2026-04-01T09:51:53.437 INFO:teuthology.orchestra.run.vm03.stdout:(87/152): c-ares-1.19.1-2.el9_4.x86_64.rpm 686 kB/s | 110 kB 00:00 2026-04-01T09:51:53.440 INFO:teuthology.orchestra.run.vm07.stdout:(110/152): flexiblas-openblas-openmp-3.0.4-8.el 573 kB/s | 15 kB 00:00 2026-04-01T09:51:53.454 INFO:teuthology.orchestra.run.vm00.stdout:(34/152): ceph-mgr-rook-20.2.0-8.g0597158282e.e 481 kB/s | 50 kB 00:00 2026-04-01T09:51:53.465 INFO:teuthology.orchestra.run.vm03.stdout:(88/152): fuse-2.9.9-17.el9.x86_64.rpm 2.8 MB/s | 78 kB 00:00 2026-04-01T09:51:53.469 INFO:teuthology.orchestra.run.vm07.stdout:(111/152): libnbd-1.20.3-4.el9.x86_64.rpm 5.7 MB/s | 171 kB 00:00 2026-04-01T09:51:53.492 
INFO:teuthology.orchestra.run.vm03.stdout:(89/152): ledmon-libs-1.1.0-3.el9.x86_64.rpm 1.5 MB/s | 41 kB 00:00 2026-04-01T09:51:53.497 INFO:teuthology.orchestra.run.vm03.stdout:(90/152): cryptsetup-2.7.2-4.el9.x86_64.rpm 3.8 MB/s | 310 kB 00:00 2026-04-01T09:51:53.499 INFO:teuthology.orchestra.run.vm07.stdout:(112/152): libpmemobj-1.12.1-1.el9.x86_64.rpm 5.2 MB/s | 159 kB 00:00 2026-04-01T09:51:53.526 INFO:teuthology.orchestra.run.vm03.stdout:(91/152): libconfig-1.7.2-9.el9.x86_64.rpm 2.0 MB/s | 71 kB 00:00 2026-04-01T09:51:53.528 INFO:teuthology.orchestra.run.vm07.stdout:(113/152): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.5 MB/s | 44 kB 00:00 2026-04-01T09:51:53.564 INFO:teuthology.orchestra.run.vm03.stdout:(92/152): libgfortran-11.5.0-11.el9.x86_64.rpm 12 MB/s | 794 kB 00:00 2026-04-01T09:51:53.582 INFO:teuthology.orchestra.run.vm03.stdout:(93/152): libquadmath-11.5.0-11.el9.x86_64.rpm 3.3 MB/s | 184 kB 00:00 2026-04-01T09:51:53.591 INFO:teuthology.orchestra.run.vm03.stdout:(94/152): lmdb-libs-0.9.29-3.el9.x86_64.rpm 2.2 MB/s | 60 kB 00:00 2026-04-01T09:51:53.608 INFO:teuthology.orchestra.run.vm03.stdout:(95/152): mailcap-2.1.49-5.el9.0.2.noarch.rpm 1.2 MB/s | 32 kB 00:00 2026-04-01T09:51:53.616 INFO:teuthology.orchestra.run.vm07.stdout:(114/152): librdkafka-1.6.1-102.el9.x86_64.rpm 7.4 MB/s | 662 kB 00:00 2026-04-01T09:51:53.633 INFO:teuthology.orchestra.run.vm07.stdout:(115/152): flexiblas-netlib-3.0.4-8.el9.0.1.x86 12 MB/s | 3.0 MB 00:00 2026-04-01T09:51:53.633 INFO:teuthology.orchestra.run.vm03.stdout:(96/152): nvme-cli-2.13-1.el9.x86_64.rpm 24 MB/s | 1.0 MB 00:00 2026-04-01T09:51:53.636 INFO:teuthology.orchestra.run.vm03.stdout:(97/152): pciutils-3.7.0-7.el9.x86_64.rpm 3.3 MB/s | 92 kB 00:00 2026-04-01T09:51:53.666 INFO:teuthology.orchestra.run.vm07.stdout:(116/152): libxslt-1.1.34-13.el9_6.x86_64.rpm 7.1 MB/s | 239 kB 00:00 2026-04-01T09:51:53.668 INFO:teuthology.orchestra.run.vm03.stdout:(98/152): python3-cffi-1.14.5-5.el9.x86_64.rpm 6.8 MB/s | 241 kB 00:00 
2026-04-01T09:51:53.672 INFO:teuthology.orchestra.run.vm07.stdout:(117/152): libstoragemgmt-1.10.1-1.el9.x86_64.r 4.3 MB/s | 243 kB 00:00 2026-04-01T09:51:53.699 INFO:teuthology.orchestra.run.vm03.stdout:(99/152): python3-ply-3.11-14.el9.0.1.noarch.rp 3.2 MB/s | 103 kB 00:00 2026-04-01T09:51:53.702 INFO:teuthology.orchestra.run.vm07.stdout:(118/152): lttng-ust-2.12.0-6.el9.x86_64.rpm 7.9 MB/s | 282 kB 00:00 2026-04-01T09:51:53.704 INFO:teuthology.orchestra.run.vm07.stdout:(119/152): lua-5.4.4-4.el9.x86_64.rpm 5.7 MB/s | 187 kB 00:00 2026-04-01T09:51:53.720 INFO:teuthology.orchestra.run.vm03.stdout:(100/152): python3-cryptography-36.0.1-5.el9_6. 14 MB/s | 1.2 MB 00:00 2026-04-01T09:51:53.727 INFO:teuthology.orchestra.run.vm03.stdout:(101/152): python3-pycparser-2.20-6.el9.noarch. 4.4 MB/s | 124 kB 00:00 2026-04-01T09:51:53.728 INFO:teuthology.orchestra.run.vm07.stdout:(120/152): openblas-0.3.29-1.el9.x86_64.rpm 1.5 MB/s | 41 kB 00:00 2026-04-01T09:51:53.750 INFO:teuthology.orchestra.run.vm03.stdout:(102/152): python3-pyparsing-2.4.7-9.el9.0.1.no 5.0 MB/s | 150 kB 00:00 2026-04-01T09:51:53.756 INFO:teuthology.orchestra.run.vm07.stdout:(121/152): perl-Benchmark-1.23-481.1.el9_6.noar 947 kB/s | 25 kB 00:00 2026-04-01T09:51:53.756 INFO:teuthology.orchestra.run.vm03.stdout:(103/152): python3-requests-2.25.1-10.el9_6.noa 4.0 MB/s | 115 kB 00:00 2026-04-01T09:51:53.780 INFO:teuthology.orchestra.run.vm03.stdout:(104/152): python3-urllib3-1.26.5-6.el9_7.1.noa 6.3 MB/s | 191 kB 00:00 2026-04-01T09:51:53.791 INFO:teuthology.orchestra.run.vm07.stdout:(122/152): perl-Test-Harness-3.42-461.el9.noarc 7.4 MB/s | 267 kB 00:00 2026-04-01T09:51:53.797 INFO:teuthology.orchestra.run.vm03.stdout:(105/152): smartmontools-7.2-9.el9.x86_64.rpm 13 MB/s | 551 kB 00:00 2026-04-01T09:51:53.809 INFO:teuthology.orchestra.run.vm03.stdout:(106/152): unzip-6.0-59.el9.x86_64.rpm 6.0 MB/s | 180 kB 00:00 2026-04-01T09:51:53.827 INFO:teuthology.orchestra.run.vm03.stdout:(107/152): 
zip-3.0-35.el9.x86_64.rpm 8.4 MB/s | 263 kB 00:00 2026-04-01T09:51:53.838 INFO:teuthology.orchestra.run.vm03.stdout:(108/152): boost-program-options-1.75.0-13.el9_ 3.6 MB/s | 104 kB 00:00 2026-04-01T09:51:53.854 INFO:teuthology.orchestra.run.vm03.stdout:(109/152): flexiblas-3.0.4-8.el9.0.1.x86_64.rpm 1.1 MB/s | 30 kB 00:00 2026-04-01T09:51:53.873 INFO:teuthology.orchestra.run.vm07.stdout:(123/152): protobuf-3.14.0-17.el9_7.x86_64.rpm 12 MB/s | 1.0 MB 00:00 2026-04-01T09:51:53.882 INFO:teuthology.orchestra.run.vm03.stdout:(110/152): flexiblas-openblas-openmp-3.0.4-8.el 544 kB/s | 15 kB 00:00 2026-04-01T09:51:53.899 INFO:teuthology.orchestra.run.vm00.stdout:(35/152): ceph-prometheus-alerts-20.2.0-8.g0597 38 kB/s | 17 kB 00:00 2026-04-01T09:51:53.917 INFO:teuthology.orchestra.run.vm03.stdout:(111/152): libnbd-1.20.3-4.el9.x86_64.rpm 4.9 MB/s | 171 kB 00:00 2026-04-01T09:51:53.941 INFO:teuthology.orchestra.run.vm00.stdout:(36/152): ceph-volume-20.2.0-8.g0597158282e.el9 6.9 MB/s | 297 kB 00:00 2026-04-01T09:51:53.952 INFO:teuthology.orchestra.run.vm03.stdout:(112/152): flexiblas-netlib-3.0.4-8.el9.0.1.x86 26 MB/s | 3.0 MB 00:00 2026-04-01T09:51:53.962 INFO:teuthology.orchestra.run.vm03.stdout:(113/152): libpmemobj-1.12.1-1.el9.x86_64.rpm 3.5 MB/s | 159 kB 00:00 2026-04-01T09:51:53.979 INFO:teuthology.orchestra.run.vm03.stdout:(114/152): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.6 MB/s | 44 kB 00:00 2026-04-01T09:51:54.013 INFO:teuthology.orchestra.run.vm03.stdout:(115/152): libstoragemgmt-1.10.1-1.el9.x86_64.r 7.1 MB/s | 243 kB 00:00 2026-04-01T09:51:54.041 INFO:teuthology.orchestra.run.vm00.stdout:(37/152): cephadm-20.2.0-8.g0597158282e.el9.cly 9.8 MB/s | 1.0 MB 00:00 2026-04-01T09:51:54.052 INFO:teuthology.orchestra.run.vm03.stdout:(116/152): libxslt-1.1.34-13.el9_6.x86_64.rpm 6.0 MB/s | 239 kB 00:00 2026-04-01T09:51:54.061 INFO:teuthology.orchestra.run.vm00.stdout:(38/152): abseil-cpp-20211102.0-4.el9.x86_64.rp 28 MB/s | 551 kB 00:00 2026-04-01T09:51:54.068 
INFO:teuthology.orchestra.run.vm00.stdout:(39/152): gperftools-libs-2.9.1-3.el9.x86_64.rp 45 MB/s | 308 kB 00:00 2026-04-01T09:51:54.070 INFO:teuthology.orchestra.run.vm00.stdout:(40/152): grpc-data-1.46.7-10.el9.noarch.rpm 8.6 MB/s | 19 kB 00:00 2026-04-01T09:51:54.086 INFO:teuthology.orchestra.run.vm03.stdout:(117/152): lttng-ust-2.12.0-6.el9.x86_64.rpm 8.2 MB/s | 282 kB 00:00 2026-04-01T09:51:54.120 INFO:teuthology.orchestra.run.vm03.stdout:(118/152): lua-5.4.4-4.el9.x86_64.rpm 5.5 MB/s | 187 kB 00:00 2026-04-01T09:51:54.149 INFO:teuthology.orchestra.run.vm03.stdout:(119/152): openblas-0.3.29-1.el9.x86_64.rpm 1.4 MB/s | 41 kB 00:00 2026-04-01T09:51:54.152 INFO:teuthology.orchestra.run.vm03.stdout:(120/152): librdkafka-1.6.1-102.el9.x86_64.rpm 3.4 MB/s | 662 kB 00:00 2026-04-01T09:51:54.160 INFO:teuthology.orchestra.run.vm00.stdout:(41/152): libarrow-9.0.0-15.el9.x86_64.rpm 49 MB/s | 4.4 MB 00:00 2026-04-01T09:51:54.163 INFO:teuthology.orchestra.run.vm00.stdout:(42/152): libarrow-doc-9.0.0-15.el9.noarch.rpm 10 MB/s | 25 kB 00:00 2026-04-01T09:51:54.166 INFO:teuthology.orchestra.run.vm00.stdout:(43/152): liboath-2.6.12-1.el9.x86_64.rpm 17 MB/s | 49 kB 00:00 2026-04-01T09:51:54.170 INFO:teuthology.orchestra.run.vm00.stdout:(44/152): libunwind-1.6.2-1.el9.x86_64.rpm 21 MB/s | 67 kB 00:00 2026-04-01T09:51:54.174 INFO:teuthology.orchestra.run.vm00.stdout:(45/152): luarocks-3.9.2-5.el9.noarch.rpm 36 MB/s | 151 kB 00:00 2026-04-01T09:51:54.183 INFO:teuthology.orchestra.run.vm03.stdout:(121/152): perl-Benchmark-1.23-481.1.el9_6.noar 824 kB/s | 25 kB 00:00 2026-04-01T09:51:54.203 INFO:teuthology.orchestra.run.vm00.stdout:(46/152): parquet-libs-9.0.0-15.el9.x86_64.rpm 29 MB/s | 838 kB 00:00 2026-04-01T09:51:54.216 INFO:teuthology.orchestra.run.vm00.stdout:(47/152): python3-asyncssh-2.13.2-5.el9.noarch. 
44 MB/s | 548 kB 00:00 2026-04-01T09:51:54.219 INFO:teuthology.orchestra.run.vm03.stdout:(122/152): perl-Test-Harness-3.42-461.el9.noarc 7.4 MB/s | 267 kB 00:00 2026-04-01T09:51:54.223 INFO:teuthology.orchestra.run.vm00.stdout:(48/152): python3-autocommand-2.2.2-8.el9.noarc 3.9 MB/s | 29 kB 00:00 2026-04-01T09:51:54.229 INFO:teuthology.orchestra.run.vm00.stdout:(49/152): python3-backports-tarfile-1.2.0-1.el9 11 MB/s | 60 kB 00:00 2026-04-01T09:51:54.233 INFO:teuthology.orchestra.run.vm00.stdout:(50/152): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 11 MB/s | 43 kB 00:00 2026-04-01T09:51:54.236 INFO:teuthology.orchestra.run.vm00.stdout:(51/152): python3-cachetools-4.2.4-1.el9.noarch 10 MB/s | 32 kB 00:00 2026-04-01T09:51:54.239 INFO:teuthology.orchestra.run.vm00.stdout:(52/152): python3-certifi-2023.05.07-4.el9.noar 6.3 MB/s | 14 kB 00:00 2026-04-01T09:51:54.243 INFO:teuthology.orchestra.run.vm00.stdout:(53/152): python3-cheroot-10.0.1-5.el9.noarch.r 39 MB/s | 173 kB 00:00 2026-04-01T09:51:54.252 INFO:teuthology.orchestra.run.vm00.stdout:(54/152): python3-cherrypy-18.10.0-5.el9.noarch 35 MB/s | 290 kB 00:00 2026-04-01T09:51:54.283 INFO:teuthology.orchestra.run.vm00.stdout:(55/152): python3-google-auth-2.45.0-1.el9.noar 8.1 MB/s | 254 kB 00:00 2026-04-01T09:51:54.322 INFO:teuthology.orchestra.run.vm03.stdout:(123/152): protobuf-3.14.0-17.el9_7.x86_64.rpm 9.8 MB/s | 1.0 MB 00:00 2026-04-01T09:51:54.325 INFO:teuthology.orchestra.run.vm00.stdout:(56/152): python3-grpcio-1.46.7-10.el9.x86_64.r 48 MB/s | 2.0 MB 00:00 2026-04-01T09:51:54.330 INFO:teuthology.orchestra.run.vm00.stdout:(57/152): python3-grpcio-tools-1.46.7-10.el9.x8 34 MB/s | 144 kB 00:00 2026-04-01T09:51:54.334 INFO:teuthology.orchestra.run.vm00.stdout:(58/152): python3-influxdb-5.3.1-1.el9.noarch.r 34 MB/s | 139 kB 00:00 2026-04-01T09:51:54.337 INFO:teuthology.orchestra.run.vm00.stdout:(59/152): python3-isodate-0.6.1-3.el9.noarch.rp 18 MB/s | 56 kB 00:00 2026-04-01T09:51:54.340 
INFO:teuthology.orchestra.run.vm00.stdout:(60/152): python3-jaraco-8.2.1-3.el9.noarch.rpm 4.6 MB/s | 11 kB 00:00 2026-04-01T09:51:54.342 INFO:teuthology.orchestra.run.vm00.stdout:(61/152): python3-jaraco-classes-3.2.1-5.el9.no 7.6 MB/s | 18 kB 00:00 2026-04-01T09:51:54.345 INFO:teuthology.orchestra.run.vm00.stdout:(62/152): python3-jaraco-collections-3.0.0-8.el 7.6 MB/s | 23 kB 00:00 2026-04-01T09:51:54.353 INFO:teuthology.orchestra.run.vm00.stdout:(63/152): python3-jaraco-context-6.0.1-3.el9.no 2.6 MB/s | 20 kB 00:00 2026-04-01T09:51:54.354 INFO:teuthology.orchestra.run.vm07.stdout:(124/152): python3-babel-2.9.1-2.el9.noarch.rpm 12 MB/s | 5.8 MB 00:00 2026-04-01T09:51:54.359 INFO:teuthology.orchestra.run.vm00.stdout:(64/152): python3-jaraco-functools-3.5.0-2.el9. 3.5 MB/s | 19 kB 00:00 2026-04-01T09:51:54.366 INFO:teuthology.orchestra.run.vm00.stdout:(65/152): python3-jaraco-text-4.0.0-2.el9.noarc 4.1 MB/s | 26 kB 00:00 2026-04-01T09:51:54.390 INFO:teuthology.orchestra.run.vm00.stdout:(66/152): python3-kubernetes-26.1.0-3.el9.noarc 43 MB/s | 1.0 MB 00:00 2026-04-01T09:51:54.399 INFO:teuthology.orchestra.run.vm00.stdout:(67/152): python3-more-itertools-8.12.0-2.el9.n 9.1 MB/s | 79 kB 00:00 2026-04-01T09:51:54.403 INFO:teuthology.orchestra.run.vm00.stdout:(68/152): python3-msgpack-1.0.3-2.el9.x86_64.rp 23 MB/s | 86 kB 00:00 2026-04-01T09:51:54.409 INFO:teuthology.orchestra.run.vm00.stdout:(69/152): python3-natsort-7.1.1-5.el9.noarch.rp 9.9 MB/s | 58 kB 00:00 2026-04-01T09:51:54.411 INFO:teuthology.orchestra.run.vm00.stdout:(70/152): python3-portend-3.1.0-2.el9.noarch.rp 7.1 MB/s | 16 kB 00:00 2026-04-01T09:51:54.416 INFO:teuthology.orchestra.run.vm00.stdout:(71/152): python3-pyOpenSSL-21.0.0-1.el9.noarch 21 MB/s | 90 kB 00:00 2026-04-01T09:51:54.419 INFO:teuthology.orchestra.run.vm00.stdout:(72/152): python3-repoze-lru-0.7-16.el9.noarch. 
11 MB/s | 31 kB 00:00 2026-04-01T09:51:54.426 INFO:teuthology.orchestra.run.vm00.stdout:(73/152): python3-routes-2.5.1-5.el9.noarch.rpm 26 MB/s | 188 kB 00:00 2026-04-01T09:51:54.429 INFO:teuthology.orchestra.run.vm00.stdout:(74/152): python3-rsa-4.9-2.el9.noarch.rpm 21 MB/s | 59 kB 00:00 2026-04-01T09:51:54.433 INFO:teuthology.orchestra.run.vm00.stdout:(75/152): python3-saml-1.16.0-1.el9.noarch.rpm 33 MB/s | 125 kB 00:00 2026-04-01T09:51:54.437 INFO:teuthology.orchestra.run.vm00.stdout:(76/152): python3-tempora-5.0.0-2.el9.noarch.rp 11 MB/s | 36 kB 00:00 2026-04-01T09:51:54.446 INFO:teuthology.orchestra.run.vm03.stdout:(124/152): openblas-openmp-0.3.29-1.el9.x86_64. 18 MB/s | 5.3 MB 00:00 2026-04-01T09:51:54.446 INFO:teuthology.orchestra.run.vm00.stdout:(77/152): python3-typing-extensions-4.15.0-1.el 9.0 MB/s | 86 kB 00:00 2026-04-01T09:51:54.451 INFO:teuthology.orchestra.run.vm00.stdout:(78/152): python3-websocket-client-1.2.3-2.el9. 23 MB/s | 90 kB 00:00 2026-04-01T09:51:54.454 INFO:teuthology.orchestra.run.vm00.stdout:(79/152): python3-xmlsec-1.3.13-1.el9.x86_64.rp 16 MB/s | 48 kB 00:00 2026-04-01T09:51:54.456 INFO:teuthology.orchestra.run.vm00.stdout:(80/152): python3-xmltodict-0.12.0-15.el9.noarc 9.7 MB/s | 22 kB 00:00 2026-04-01T09:51:54.459 INFO:teuthology.orchestra.run.vm00.stdout:(81/152): python3-zc-lockfile-2.0-10.el9.noarch 8.4 MB/s | 20 kB 00:00 2026-04-01T09:51:54.464 INFO:teuthology.orchestra.run.vm00.stdout:(82/152): re2-20211101-20.el9.x86_64.rpm 37 MB/s | 191 kB 00:00 2026-04-01T09:51:54.470 INFO:teuthology.orchestra.run.vm00.stdout:(83/152): s3cmd-2.4.0-1.el9.noarch.rpm 39 MB/s | 206 kB 00:00 2026-04-01T09:51:54.481 INFO:teuthology.orchestra.run.vm03.stdout:(125/152): python3-devel-3.9.23-2.el9.x86_64.rp 5.8 MB/s | 205 kB 00:00 2026-04-01T09:51:54.504 INFO:teuthology.orchestra.run.vm00.stdout:(84/152): thrift-0.15.0-4.el9.x86_64.rpm 46 MB/s | 1.6 MB 00:00 2026-04-01T09:51:54.513 INFO:teuthology.orchestra.run.vm03.stdout:(126/152): 
python3-jinja2-2.11.3-8.el9_5.noarch 7.1 MB/s | 228 kB 00:00 2026-04-01T09:51:54.540 INFO:teuthology.orchestra.run.vm03.stdout:(127/152): python3-jmespath-1.0.1-1.el9_7.noarc 1.6 MB/s | 43 kB 00:00 2026-04-01T09:51:54.556 INFO:teuthology.orchestra.run.vm03.stdout:(128/152): python3-babel-2.9.1-2.el9.noarch.rpm 25 MB/s | 5.8 MB 00:00 2026-04-01T09:51:54.569 INFO:teuthology.orchestra.run.vm03.stdout:(129/152): python3-libstoragemgmt-1.10.1-1.el9. 5.8 MB/s | 166 kB 00:00 2026-04-01T09:51:54.596 INFO:teuthology.orchestra.run.vm03.stdout:(130/152): python3-markupsafe-1.1.1-12.el9.x86_ 1.2 MB/s | 32 kB 00:00 2026-04-01T09:51:54.616 INFO:teuthology.orchestra.run.vm03.stdout:(131/152): python3-lxml-4.6.5-3.el9.x86_64.rpm 20 MB/s | 1.2 MB 00:00 2026-04-01T09:51:54.638 INFO:teuthology.orchestra.run.vm00.stdout:(85/152): bzip2-1.0.8-10.el9_5.x86_64.rpm 386 kB/s | 51 kB 00:00 2026-04-01T09:51:54.648 INFO:teuthology.orchestra.run.vm03.stdout:(132/152): python3-numpy-f2py-1.23.5-2.el9_7.x8 11 MB/s | 368 kB 00:00 2026-04-01T09:51:54.676 INFO:teuthology.orchestra.run.vm03.stdout:(133/152): python3-packaging-20.9-5.el9.noarch. 2.4 MB/s | 69 kB 00:00 2026-04-01T09:51:54.700 INFO:teuthology.orchestra.run.vm07.stdout:(125/152): python3-devel-3.9.23-2.el9.x86_64.rp 590 kB/s | 205 kB 00:00 2026-04-01T09:51:54.708 INFO:teuthology.orchestra.run.vm03.stdout:(134/152): python3-protobuf-3.14.0-17.el9_7.noa 7.3 MB/s | 237 kB 00:00 2026-04-01T09:51:54.731 INFO:teuthology.orchestra.run.vm07.stdout:(126/152): python3-jinja2-2.11.3-8.el9_5.noarch 7.3 MB/s | 228 kB 00:00 2026-04-01T09:51:54.734 INFO:teuthology.orchestra.run.vm00.stdout:(86/152): c-ares-1.19.1-2.el9_4.x86_64.rpm 1.1 MB/s | 110 kB 00:00 2026-04-01T09:51:54.736 INFO:teuthology.orchestra.run.vm03.stdout:(135/152): python3-pyasn1-0.4.8-7.el9_7.noarch. 
4.6 MB/s | 132 kB 00:00 2026-04-01T09:51:54.759 INFO:teuthology.orchestra.run.vm07.stdout:(127/152): python3-jmespath-1.0.1-1.el9_7.noarc 1.6 MB/s | 43 kB 00:00 2026-04-01T09:51:54.765 INFO:teuthology.orchestra.run.vm03.stdout:(136/152): python3-pyasn1-modules-0.4.8-7.el9_7 7.1 MB/s | 210 kB 00:00 2026-04-01T09:51:54.789 INFO:teuthology.orchestra.run.vm00.stdout:(87/152): cryptsetup-2.7.2-4.el9.x86_64.rpm 5.6 MB/s | 310 kB 00:00 2026-04-01T09:51:54.789 INFO:teuthology.orchestra.run.vm07.stdout:(128/152): python3-libstoragemgmt-1.10.1-1.el9. 5.5 MB/s | 166 kB 00:00 2026-04-01T09:51:54.792 INFO:teuthology.orchestra.run.vm03.stdout:(137/152): python3-requests-oauthlib-1.3.0-12.e 1.6 MB/s | 43 kB 00:00 2026-04-01T09:51:54.816 INFO:teuthology.orchestra.run.vm00.stdout:(88/152): fuse-2.9.9-17.el9.x86_64.rpm 2.8 MB/s | 78 kB 00:00 2026-04-01T09:51:54.856 INFO:teuthology.orchestra.run.vm00.stdout:(89/152): ledmon-libs-1.1.0-3.el9.x86_64.rpm 1.0 MB/s | 41 kB 00:00 2026-04-01T09:51:54.884 INFO:teuthology.orchestra.run.vm00.stdout:(90/152): libconfig-1.7.2-9.el9.x86_64.rpm 2.5 MB/s | 71 kB 00:00 2026-04-01T09:51:54.942 INFO:teuthology.orchestra.run.vm00.stdout:(91/152): libgfortran-11.5.0-11.el9.x86_64.rpm 14 MB/s | 794 kB 00:00 2026-04-01T09:51:54.970 INFO:teuthology.orchestra.run.vm00.stdout:(92/152): libquadmath-11.5.0-11.el9.x86_64.rpm 6.4 MB/s | 184 kB 00:00 2026-04-01T09:51:54.973 INFO:teuthology.orchestra.run.vm07.stdout:(129/152): python3-lxml-4.6.5-3.el9.x86_64.rpm 6.5 MB/s | 1.2 MB 00:00 2026-04-01T09:51:55.000 INFO:teuthology.orchestra.run.vm07.stdout:(130/152): python3-markupsafe-1.1.1-12.el9.x86_ 1.1 MB/s | 32 kB 00:00 2026-04-01T09:51:55.008 INFO:teuthology.orchestra.run.vm00.stdout:(93/152): lmdb-libs-0.9.29-3.el9.x86_64.rpm 1.5 MB/s | 60 kB 00:00 2026-04-01T09:51:55.049 INFO:teuthology.orchestra.run.vm00.stdout:(94/152): mailcap-2.1.49-5.el9.0.2.noarch.rpm 796 kB/s | 32 kB 00:00 2026-04-01T09:51:55.072 INFO:teuthology.orchestra.run.vm03.stdout:(138/152): 
python3-numpy-1.23.5-2.el9_7.x86_64. 12 MB/s | 5.8 MB 00:00 2026-04-01T09:51:55.095 INFO:teuthology.orchestra.run.vm00.stdout:(95/152): nvme-cli-2.13-1.el9.x86_64.rpm 22 MB/s | 1.0 MB 00:00 2026-04-01T09:51:55.100 INFO:teuthology.orchestra.run.vm03.stdout:(139/152): python3-toml-0.10.2-6.el9.0.1.noarch 1.6 MB/s | 44 kB 00:00 2026-04-01T09:51:55.123 INFO:teuthology.orchestra.run.vm00.stdout:(96/152): pciutils-3.7.0-7.el9.x86_64.rpm 3.2 MB/s | 92 kB 00:00 2026-04-01T09:51:55.132 INFO:teuthology.orchestra.run.vm03.stdout:(140/152): qatlib-24.09.0-1.el9.x86_64.rpm 7.0 MB/s | 221 kB 00:00 2026-04-01T09:51:55.154 INFO:teuthology.orchestra.run.vm00.stdout:(97/152): python3-cffi-1.14.5-5.el9.x86_64.rpm 7.8 MB/s | 241 kB 00:00 2026-04-01T09:51:55.159 INFO:teuthology.orchestra.run.vm03.stdout:(141/152): qatlib-service-24.09.0-1.el9.x86_64. 1.3 MB/s | 36 kB 00:00 2026-04-01T09:51:55.188 INFO:teuthology.orchestra.run.vm03.stdout:(142/152): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.2 MB/s | 65 kB 00:00 2026-04-01T09:51:55.205 INFO:teuthology.orchestra.run.vm00.stdout:(98/152): python3-cryptography-36.0.1-5.el9_6.x 23 MB/s | 1.2 MB 00:00 2026-04-01T09:51:55.222 INFO:teuthology.orchestra.run.vm03.stdout:(143/152): socat-1.7.4.1-8.el9.x86_64.rpm 8.8 MB/s | 299 kB 00:00 2026-04-01T09:51:55.233 INFO:teuthology.orchestra.run.vm00.stdout:(99/152): python3-ply-3.11-14.el9.0.1.noarch.rp 3.6 MB/s | 103 kB 00:00 2026-04-01T09:51:55.252 INFO:teuthology.orchestra.run.vm03.stdout:(144/152): xmlsec1-1.2.29-13.el9.x86_64.rpm 6.2 MB/s | 188 kB 00:00 2026-04-01T09:51:55.266 INFO:teuthology.orchestra.run.vm00.stdout:(100/152): python3-pycparser-2.20-6.el9.noarch. 3.8 MB/s | 124 kB 00:00 2026-04-01T09:51:55.266 INFO:teuthology.orchestra.run.vm07.stdout:(131/152): openblas-openmp-0.3.29-1.el9.x86_64. 
3.4 MB/s | 5.3 MB 00:01 2026-04-01T09:51:55.279 INFO:teuthology.orchestra.run.vm03.stdout:(145/152): xmlsec1-openssl-1.2.29-13.el9.x86_64 3.2 MB/s | 89 kB 00:00 2026-04-01T09:51:55.298 INFO:teuthology.orchestra.run.vm00.stdout:(101/152): python3-pyparsing-2.4.7-9.el9.0.1.no 4.6 MB/s | 150 kB 00:00 2026-04-01T09:51:55.307 INFO:teuthology.orchestra.run.vm03.stdout:(146/152): xmlstarlet-1.6.1-20.el9.x86_64.rpm 2.2 MB/s | 63 kB 00:00 2026-04-01T09:51:55.328 INFO:teuthology.orchestra.run.vm00.stdout:(102/152): python3-requests-2.25.1-10.el9_6.noa 3.8 MB/s | 115 kB 00:00 2026-04-01T09:51:55.334 INFO:teuthology.orchestra.run.vm03.stdout:(147/152): lua-devel-5.4.4-4.el9.x86_64.rpm 800 kB/s | 21 kB 00:00 2026-04-01T09:51:55.356 INFO:teuthology.orchestra.run.vm00.stdout:(103/152): python3-urllib3-1.26.5-6.el9_7.1.noa 6.6 MB/s | 191 kB 00:00 2026-04-01T09:51:55.402 INFO:teuthology.orchestra.run.vm00.stdout:(104/152): smartmontools-7.2-9.el9.x86_64.rpm 12 MB/s | 551 kB 00:00 2026-04-01T09:51:55.426 INFO:teuthology.orchestra.run.vm07.stdout:(132/152): python3-numpy-f2py-1.23.5-2.el9_7.x8 2.3 MB/s | 368 kB 00:00 2026-04-01T09:51:55.432 INFO:teuthology.orchestra.run.vm00.stdout:(105/152): unzip-6.0-59.el9.x86_64.rpm 5.9 MB/s | 180 kB 00:00 2026-04-01T09:51:55.438 INFO:teuthology.orchestra.run.vm03.stdout:(148/152): protobuf-compiler-3.14.0-17.el9_7.x8 8.1 MB/s | 862 kB 00:00 2026-04-01T09:51:55.456 INFO:teuthology.orchestra.run.vm07.stdout:(133/152): python3-packaging-20.9-5.el9.noarch. 
2.3 MB/s | 69 kB 00:00 2026-04-01T09:51:55.463 INFO:teuthology.orchestra.run.vm00.stdout:(106/152): zip-3.0-35.el9.x86_64.rpm 8.3 MB/s | 263 kB 00:00 2026-04-01T09:51:55.492 INFO:teuthology.orchestra.run.vm00.stdout:(107/152): boost-program-options-1.75.0-13.el9_ 3.6 MB/s | 104 kB 00:00 2026-04-01T09:51:55.520 INFO:teuthology.orchestra.run.vm00.stdout:(108/152): flexiblas-3.0.4-8.el9.0.1.x86_64.rpm 1.0 MB/s | 30 kB 00:00 2026-04-01T09:51:55.588 INFO:teuthology.orchestra.run.vm03.stdout:(149/152): python3-scipy-1.9.3-2.el9.x86_64.rpm 24 MB/s | 19 MB 00:00 2026-04-01T09:51:55.588 INFO:teuthology.orchestra.run.vm07.stdout:(134/152): python3-protobuf-3.14.0-17.el9_7.noa 1.7 MB/s | 237 kB 00:00 2026-04-01T09:51:55.593 INFO:teuthology.orchestra.run.vm00.stdout:(109/152): flexiblas-netlib-3.0.4-8.el9.0.1.x86 41 MB/s | 3.0 MB 00:00 2026-04-01T09:51:55.620 INFO:teuthology.orchestra.run.vm00.stdout:(110/152): flexiblas-openblas-openmp-3.0.4-8.el 564 kB/s | 15 kB 00:00 2026-04-01T09:51:55.644 INFO:teuthology.orchestra.run.vm07.stdout:(135/152): python3-pyasn1-0.4.8-7.el9_7.noarch. 
2.3 MB/s | 132 kB 00:00 2026-04-01T09:51:55.649 INFO:teuthology.orchestra.run.vm00.stdout:(111/152): libnbd-1.20.3-4.el9.x86_64.rpm 6.0 MB/s | 171 kB 00:00 2026-04-01T09:51:55.677 INFO:teuthology.orchestra.run.vm00.stdout:(112/152): libpmemobj-1.12.1-1.el9.x86_64.rpm 5.5 MB/s | 159 kB 00:00 2026-04-01T09:51:55.704 INFO:teuthology.orchestra.run.vm00.stdout:(113/152): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.6 MB/s | 44 kB 00:00 2026-04-01T09:51:55.736 INFO:teuthology.orchestra.run.vm07.stdout:(136/152): python3-pyasn1-modules-0.4.8-7.el9_7 2.2 MB/s | 210 kB 00:00 2026-04-01T09:51:55.746 INFO:teuthology.orchestra.run.vm00.stdout:(114/152): librdkafka-1.6.1-102.el9.x86_64.rpm 16 MB/s | 662 kB 00:00 2026-04-01T09:51:55.765 INFO:teuthology.orchestra.run.vm07.stdout:(137/152): python3-requests-oauthlib-1.3.0-12.e 1.5 MB/s | 43 kB 00:00 2026-04-01T09:51:55.775 INFO:teuthology.orchestra.run.vm00.stdout:(115/152): libstoragemgmt-1.10.1-1.el9.x86_64.r 8.1 MB/s | 243 kB 00:00 2026-04-01T09:51:55.804 INFO:teuthology.orchestra.run.vm00.stdout:(116/152): libxslt-1.1.34-13.el9_6.x86_64.rpm 8.1 MB/s | 239 kB 00:00 2026-04-01T09:51:55.834 INFO:teuthology.orchestra.run.vm07.stdout:(138/152): python3-numpy-1.23.5-2.el9_7.x86_64. 6.9 MB/s | 5.8 MB 00:00 2026-04-01T09:51:55.835 INFO:teuthology.orchestra.run.vm00.stdout:(117/152): lttng-ust-2.12.0-6.el9.x86_64.rpm 9.2 MB/s | 282 kB 00:00 2026-04-01T09:51:55.873 INFO:teuthology.orchestra.run.vm00.stdout:(118/152): lua-5.4.4-4.el9.x86_64.rpm 4.7 MB/s | 187 kB 00:00 2026-04-01T09:51:55.874 INFO:teuthology.orchestra.run.vm03.stdout:(150/152): librados2-20.2.0-8.g0597158282e.el9. 
8.1 MB/s | 3.5 MB 00:00 2026-04-01T09:51:55.895 INFO:teuthology.orchestra.run.vm07.stdout:(139/152): python3-toml-0.10.2-6.el9.0.1.noarch 726 kB/s | 44 kB 00:00 2026-04-01T09:51:55.900 INFO:teuthology.orchestra.run.vm00.stdout:(119/152): openblas-0.3.29-1.el9.x86_64.rpm 1.5 MB/s | 41 kB 00:00 2026-04-01T09:51:55.951 INFO:teuthology.orchestra.run.vm07.stdout:(140/152): qatlib-24.09.0-1.el9.x86_64.rpm 3.9 MB/s | 221 kB 00:00 2026-04-01T09:51:55.978 INFO:teuthology.orchestra.run.vm07.stdout:(141/152): qatlib-service-24.09.0-1.el9.x86_64. 1.3 MB/s | 36 kB 00:00 2026-04-01T09:51:56.006 INFO:teuthology.orchestra.run.vm07.stdout:(142/152): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.3 MB/s | 65 kB 00:00 2026-04-01T09:51:56.006 INFO:teuthology.orchestra.run.vm00.stdout:(120/152): openblas-openmp-0.3.29-1.el9.x86_64. 50 MB/s | 5.3 MB 00:00 2026-04-01T09:51:56.034 INFO:teuthology.orchestra.run.vm00.stdout:(121/152): perl-Benchmark-1.23-481.1.el9_6.noar 943 kB/s | 25 kB 00:00 2026-04-01T09:51:56.060 INFO:teuthology.orchestra.run.vm07.stdout:(143/152): socat-1.7.4.1-8.el9.x86_64.rpm 5.5 MB/s | 299 kB 00:00 2026-04-01T09:51:56.063 INFO:teuthology.orchestra.run.vm00.stdout:(122/152): perl-Test-Harness-3.42-461.el9.noarc 8.9 MB/s | 267 kB 00:00 2026-04-01T09:51:56.089 INFO:teuthology.orchestra.run.vm07.stdout:(144/152): xmlsec1-1.2.29-13.el9.x86_64.rpm 6.3 MB/s | 188 kB 00:00 2026-04-01T09:51:56.103 INFO:teuthology.orchestra.run.vm00.stdout:(123/152): protobuf-3.14.0-17.el9_7.x86_64.rpm 26 MB/s | 1.0 MB 00:00 2026-04-01T09:51:56.117 INFO:teuthology.orchestra.run.vm07.stdout:(145/152): xmlsec1-openssl-1.2.29-13.el9.x86_64 3.2 MB/s | 89 kB 00:00 2026-04-01T09:51:56.149 INFO:teuthology.orchestra.run.vm07.stdout:(146/152): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.9 MB/s | 63 kB 00:00 2026-04-01T09:51:56.180 INFO:teuthology.orchestra.run.vm07.stdout:(147/152): lua-devel-5.4.4-4.el9.x86_64.rpm 707 kB/s | 21 kB 00:00 2026-04-01T09:51:56.213 INFO:teuthology.orchestra.run.vm00.stdout:(124/152): 
python3-babel-2.9.1-2.el9.noarch.rpm 53 MB/s | 5.8 MB 00:00 2026-04-01T09:51:56.245 INFO:teuthology.orchestra.run.vm00.stdout:(125/152): python3-devel-3.9.23-2.el9.x86_64.rp 6.4 MB/s | 205 kB 00:00 2026-04-01T09:51:56.305 INFO:teuthology.orchestra.run.vm00.stdout:(126/152): ceph-mgr-dashboard-20.2.0-8.g0597158 3.8 MB/s | 15 MB 00:03 2026-04-01T09:51:56.307 INFO:teuthology.orchestra.run.vm00.stdout:(127/152): python3-jinja2-2.11.3-8.el9_5.noarch 3.6 MB/s | 228 kB 00:00 2026-04-01T09:51:56.308 INFO:teuthology.orchestra.run.vm07.stdout:(148/152): protobuf-compiler-3.14.0-17.el9_7.x8 6.6 MB/s | 862 kB 00:00 2026-04-01T09:51:56.337 INFO:teuthology.orchestra.run.vm00.stdout:(128/152): python3-libstoragemgmt-1.10.1-1.el9. 5.5 MB/s | 166 kB 00:00 2026-04-01T09:51:56.379 INFO:teuthology.orchestra.run.vm00.stdout:(129/152): python3-lxml-4.6.5-3.el9.x86_64.rpm 28 MB/s | 1.2 MB 00:00 2026-04-01T09:51:56.406 INFO:teuthology.orchestra.run.vm00.stdout:(130/152): python3-markupsafe-1.1.1-12.el9.x86_ 1.2 MB/s | 32 kB 00:00 2026-04-01T09:51:56.439 INFO:teuthology.orchestra.run.vm00.stdout:(131/152): python3-jmespath-1.0.1-1.el9_7.noarc 324 kB/s | 43 kB 00:00 2026-04-01T09:51:56.547 INFO:teuthology.orchestra.run.vm00.stdout:(132/152): python3-numpy-1.23.5-2.el9_7.x86_64. 41 MB/s | 5.8 MB 00:00 2026-04-01T09:51:56.553 INFO:teuthology.orchestra.run.vm00.stdout:(133/152): python3-numpy-f2py-1.23.5-2.el9_7.x8 3.2 MB/s | 368 kB 00:00 2026-04-01T09:51:56.575 INFO:teuthology.orchestra.run.vm00.stdout:(134/152): python3-packaging-20.9-5.el9.noarch. 2.4 MB/s | 69 kB 00:00 2026-04-01T09:51:56.603 INFO:teuthology.orchestra.run.vm00.stdout:(135/152): python3-pyasn1-0.4.8-7.el9_7.noarch. 
4.6 MB/s | 132 kB 00:00 2026-04-01T09:51:56.606 INFO:teuthology.orchestra.run.vm00.stdout:(136/152): python3-protobuf-3.14.0-17.el9_7.noa 4.3 MB/s | 237 kB 00:00 2026-04-01T09:51:56.631 INFO:teuthology.orchestra.run.vm00.stdout:(137/152): python3-pyasn1-modules-0.4.8-7.el9_7 7.2 MB/s | 210 kB 00:00 2026-04-01T09:51:56.633 INFO:teuthology.orchestra.run.vm00.stdout:(138/152): python3-requests-oauthlib-1.3.0-12.e 1.6 MB/s | 43 kB 00:00 2026-04-01T09:51:56.660 INFO:teuthology.orchestra.run.vm00.stdout:(139/152): python3-toml-0.10.2-6.el9.0.1.noarch 1.6 MB/s | 44 kB 00:00 2026-04-01T09:51:56.714 INFO:teuthology.orchestra.run.vm00.stdout:(140/152): qatlib-24.09.0-1.el9.x86_64.rpm 4.0 MB/s | 221 kB 00:00 2026-04-01T09:51:56.742 INFO:teuthology.orchestra.run.vm00.stdout:(141/152): qatlib-service-24.09.0-1.el9.x86_64. 1.3 MB/s | 36 kB 00:00 2026-04-01T09:51:56.770 INFO:teuthology.orchestra.run.vm00.stdout:(142/152): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.3 MB/s | 65 kB 00:00 2026-04-01T09:51:56.838 INFO:teuthology.orchestra.run.vm00.stdout:(143/152): socat-1.7.4.1-8.el9.x86_64.rpm 4.4 MB/s | 299 kB 00:00 2026-04-01T09:51:56.870 INFO:teuthology.orchestra.run.vm00.stdout:(144/152): xmlsec1-1.2.29-13.el9.x86_64.rpm 5.8 MB/s | 188 kB 00:00 2026-04-01T09:51:56.899 INFO:teuthology.orchestra.run.vm00.stdout:(145/152): xmlsec1-openssl-1.2.29-13.el9.x86_64 3.1 MB/s | 89 kB 00:00 2026-04-01T09:51:56.959 INFO:teuthology.orchestra.run.vm00.stdout:(146/152): python3-scipy-1.9.3-2.el9.x86_64.rpm 57 MB/s | 19 MB 00:00 2026-04-01T09:51:56.960 INFO:teuthology.orchestra.run.vm00.stdout:(147/152): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.0 MB/s | 63 kB 00:00 2026-04-01T09:51:56.986 INFO:teuthology.orchestra.run.vm00.stdout:(148/152): lua-devel-5.4.4-4.el9.x86_64.rpm 810 kB/s | 21 kB 00:00 2026-04-01T09:51:57.089 INFO:teuthology.orchestra.run.vm00.stdout:(149/152): protobuf-compiler-3.14.0-17.el9_7.x8 6.5 MB/s | 862 kB 00:00 2026-04-01T09:51:57.359 
INFO:teuthology.orchestra.run.vm00.stdout:(150/152): librados2-20.2.0-8.g0597158282e.el9. 9.4 MB/s | 3.5 MB 00:00 2026-04-01T09:51:59.635 INFO:teuthology.orchestra.run.vm07.stdout:(149/152): python3-scipy-1.9.3-2.el9.x86_64.rpm 4.8 MB/s | 19 MB 00:03 2026-04-01T09:52:00.064 INFO:teuthology.orchestra.run.vm07.stdout:(150/152): librbd1-20.2.0-8.g0597158282e.el9.cl 6.6 MB/s | 2.8 MB 00:00 2026-04-01T09:52:00.639 INFO:teuthology.orchestra.run.vm03.stdout:(151/152): ceph-test-20.2.0-8.g0597158282e.el9. 7.1 MB/s | 85 MB 00:11 2026-04-01T09:52:01.283 INFO:teuthology.orchestra.run.vm03.stdout:(152/152): librbd1-20.2.0-8.g0597158282e.el9.cl 512 kB/s | 2.8 MB 00:05 2026-04-01T09:52:01.287 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-04-01T09:52:01.287 INFO:teuthology.orchestra.run.vm03.stdout:Total 18 MB/s | 274 MB 00:15 2026-04-01T09:52:01.689 INFO:teuthology.orchestra.run.vm07.stdout:(151/152): librados2-20.2.0-8.g0597158282e.el9. 669 kB/s | 3.5 MB 00:05 2026-04-01T09:52:02.023 INFO:teuthology.orchestra.run.vm00.stdout:(151/152): librbd1-20.2.0-8.g0597158282e.el9.cl 590 kB/s | 2.8 MB 00:04 2026-04-01T09:52:02.140 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-04-01T09:52:02.202 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-04-01T09:52:02.202 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-04-01T09:52:03.032 INFO:teuthology.orchestra.run.vm07.stdout:(152/152): ceph-test-20.2.0-8.g0597158282e.el9. 5.8 MB/s | 85 MB 00:14 2026-04-01T09:52:03.036 INFO:teuthology.orchestra.run.vm07.stdout:-------------------------------------------------------------------------------- 2026-04-01T09:52:03.037 INFO:teuthology.orchestra.run.vm07.stdout:Total 15 MB/s | 274 MB 00:17 2026-04-01T09:52:03.322 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 
2026-04-01T09:52:03.323 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-04-01T09:52:04.019 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-04-01T09:52:04.091 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-04-01T09:52:04.091 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-04-01T09:52:04.333 INFO:teuthology.orchestra.run.vm00.stdout:(152/152): ceph-test-20.2.0-8.g0597158282e.el9. 5.6 MB/s | 85 MB 00:15 2026-04-01T09:52:04.339 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-04-01T09:52:04.339 INFO:teuthology.orchestra.run.vm00.stdout:Total 15 MB/s | 274 MB 00:18 2026-04-01T09:52:04.537 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-04-01T09:52:04.546 INFO:teuthology.orchestra.run.vm03.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/154 2026-04-01T09:52:04.560 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/154 2026-04-01T09:52:04.743 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/154 2026-04-01T09:52:04.774 INFO:teuthology.orchestra.run.vm03.stdout: Upgrading : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154 2026-04-01T09:52:04.839 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154 2026-04-01T09:52:04.841 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 5/154 2026-04-01T09:52:04.862 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 5/154 2026-04-01T09:52:04.865 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.cly 6/154 2026-04-01T09:52:04.867 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 
7/154 2026-04-01T09:52:04.904 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 7/154 2026-04-01T09:52:04.912 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso. 8/154 2026-04-01T09:52:04.924 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libxslt-1.1.34-13.el9_6.x86_64 9/154 2026-04-01T09:52:04.928 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 10/154 2026-04-01T09:52:04.932 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 11/154 2026-04-01T09:52:04.937 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 12/154 2026-04-01T09:52:04.953 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lmdb-libs-0.9.29-3.el9.x86_64 13/154 2026-04-01T09:52:04.959 INFO:teuthology.orchestra.run.vm03.stdout: Installing : liboath-2.6.12-1.el9.x86_64 14/154 2026-04-01T09:52:05.116 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 15/154 2026-04-01T09:52:05.119 INFO:teuthology.orchestra.run.vm03.stdout: Upgrading : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 16/154 2026-04-01T09:52:05.136 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 16/154 2026-04-01T09:52:05.159 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-04-01T09:52:05.187 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 17/154 2026-04-01T09:52:05.196 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-lxml-4.6.5-3.el9.x86_64 18/154 2026-04-01T09:52:05.208 INFO:teuthology.orchestra.run.vm03.stdout: Installing : xmlsec1-1.2.29-13.el9.x86_64 19/154 2026-04-01T09:52:05.209 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 
20/154 2026-04-01T09:52:05.223 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-04-01T09:52:05.223 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-04-01T09:52:05.235 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-04-01T09:52:05.235 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-04-01T09:52:05.242 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 20/154 2026-04-01T09:52:05.244 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 21/154 2026-04-01T09:52:05.295 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 21/154 2026-04-01T09:52:05.308 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyasn1-0.4.8-7.el9_7.noarch 22/154 2026-04-01T09:52:05.320 INFO:teuthology.orchestra.run.vm03.stdout: Installing : protobuf-3.14.0-17.el9_7.x86_64 23/154 2026-04-01T09:52:05.325 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lua-5.4.4-4.el9.x86_64 24/154 2026-04-01T09:52:05.332 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-3.0.4-8.el9.0.1.x86_64 25/154 2026-04-01T09:52:05.373 INFO:teuthology.orchestra.run.vm03.stdout: Installing : unzip-6.0-59.el9.x86_64 26/154 2026-04-01T09:52:05.391 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-urllib3-1.26.5-6.el9_7.1.noarch 27/154 2026-04-01T09:52:05.397 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-requests-2.25.1-10.el9_6.noarch 28/154 2026-04-01T09:52:05.405 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libquadmath-11.5.0-11.el9.x86_64 29/154 2026-04-01T09:52:05.410 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libgfortran-11.5.0-11.el9.x86_64 30/154 2026-04-01T09:52:05.416 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 31/154 
2026-04-01T09:52:05.455 INFO:teuthology.orchestra.run.vm03.stdout: Installing : re2-1:20211101-20.el9.x86_64 32/154 2026-04-01T09:52:05.531 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 33/154 2026-04-01T09:52:05.604 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 34/154 2026-04-01T09:52:05.616 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el 35/154 2026-04-01T09:52:05.632 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso 36/154 2026-04-01T09:52:05.642 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 37/154 2026-04-01T09:52:05.677 INFO:teuthology.orchestra.run.vm03.stdout: Installing : zip-3.0-35.el9.x86_64 38/154 2026-04-01T09:52:05.683 INFO:teuthology.orchestra.run.vm03.stdout: Installing : luarocks-3.9.2-5.el9.noarch 39/154 2026-04-01T09:52:05.692 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 40/154 2026-04-01T09:52:05.708 INFO:teuthology.orchestra.run.vm03.stdout: Installing : protobuf-compiler-3.14.0-17.el9_7.x86_64 41/154 2026-04-01T09:52:05.778 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rsa-4.9-2.el9.noarch 42/154 2026-04-01T09:52:05.784 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 43/154 2026-04-01T09:52:05.791 INFO:teuthology.orchestra.run.vm03.stdout: Installing : xmlsec1-openssl-1.2.29-13.el9.x86_64 44/154 2026-04-01T09:52:05.800 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-xmlsec-1.3.13-1.el9.x86_64 45/154 2026-04-01T09:52:05.821 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 46/154 2026-04-01T09:52:05.827 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x8 47/154 
2026-04-01T09:52:05.840 INFO:teuthology.orchestra.run.vm03.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 48/154 2026-04-01T09:52:05.852 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso 49/154 2026-04-01T09:52:05.862 INFO:teuthology.orchestra.run.vm03.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 50/154 2026-04-01T09:52:05.869 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-toml-0.10.2-6.el9.0.1.noarch 51/154 2026-04-01T09:52:05.879 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 52/154 2026-04-01T09:52:05.904 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 53/154 2026-04-01T09:52:05.946 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 54/154 2026-04-01T09:52:05.960 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-protobuf-3.14.0-17.el9_7.noarch 55/154 2026-04-01T09:52:05.973 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 56/154 2026-04-01T09:52:06.025 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jmespath-1.0.1-1.el9_7.noarch 57/154 2026-04-01T09:52:06.356 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-devel-3.9.23-2.el9.x86_64 58/154 2026-04-01T09:52:06.396 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 59/154 2026-04-01T09:52:06.402 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jinja2-2.11.3-8.el9_5.noarch 60/154 2026-04-01T09:52:06.407 INFO:teuthology.orchestra.run.vm03.stdout: Installing : perl-Benchmark-1.23-481.1.el9_6.noarch 61/154 2026-04-01T09:52:06.410 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-04-01T09:52:06.410 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-04-01T09:52:06.489 INFO:teuthology.orchestra.run.vm03.stdout: Installing : openblas-0.3.29-1.el9.x86_64 62/154 2026-04-01T09:52:06.494 INFO:teuthology.orchestra.run.vm03.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 63/154 2026-04-01T09:52:06.513 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-04-01T09:52:06.525 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 64/154 2026-04-01T09:52:06.527 INFO:teuthology.orchestra.run.vm07.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/154 2026-04-01T09:52:06.543 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/154 2026-04-01T09:52:06.748 INFO:teuthology.orchestra.run.vm07.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/154 2026-04-01T09:52:06.752 INFO:teuthology.orchestra.run.vm07.stdout: Upgrading : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154 2026-04-01T09:52:06.818 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154 2026-04-01T09:52:06.821 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 5/154 2026-04-01T09:52:06.843 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 5/154 2026-04-01T09:52:06.848 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.cly 6/154 2026-04-01T09:52:06.851 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 7/154 2026-04-01T09:52:06.892 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 7/154 2026-04-01T09:52:06.902 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso. 
8/154 2026-04-01T09:52:06.915 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libxslt-1.1.34-13.el9_6.x86_64 9/154 2026-04-01T09:52:06.918 INFO:teuthology.orchestra.run.vm07.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 10/154 2026-04-01T09:52:06.927 INFO:teuthology.orchestra.run.vm07.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 11/154 2026-04-01T09:52:06.987 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 12/154 2026-04-01T09:52:06.991 INFO:teuthology.orchestra.run.vm07.stdout: Installing : lmdb-libs-0.9.29-3.el9.x86_64 13/154 2026-04-01T09:52:06.997 INFO:teuthology.orchestra.run.vm07.stdout: Installing : liboath-2.6.12-1.el9.x86_64 14/154 2026-04-01T09:52:07.004 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 65/154 2026-04-01T09:52:07.132 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-numpy-1:1.23.5-2.el9_7.x86_64 66/154 2026-04-01T09:52:07.159 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 15/154 2026-04-01T09:52:07.162 INFO:teuthology.orchestra.run.vm07.stdout: Upgrading : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 16/154 2026-04-01T09:52:07.183 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 16/154 2026-04-01T09:52:07.230 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 17/154 2026-04-01T09:52:07.242 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-lxml-4.6.5-3.el9.x86_64 18/154 2026-04-01T09:52:07.254 INFO:teuthology.orchestra.run.vm07.stdout: Installing : xmlsec1-1.2.29-13.el9.x86_64 19/154 2026-04-01T09:52:07.256 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 20/154 2026-04-01T09:52:07.286 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 
20/154 2026-04-01T09:52:07.288 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 21/154 2026-04-01T09:52:07.336 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 21/154 2026-04-01T09:52:07.350 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pyasn1-0.4.8-7.el9_7.noarch 22/154 2026-04-01T09:52:07.358 INFO:teuthology.orchestra.run.vm07.stdout: Installing : protobuf-3.14.0-17.el9_7.x86_64 23/154 2026-04-01T09:52:07.364 INFO:teuthology.orchestra.run.vm07.stdout: Installing : lua-5.4.4-4.el9.x86_64 24/154 2026-04-01T09:52:07.372 INFO:teuthology.orchestra.run.vm07.stdout: Installing : flexiblas-3.0.4-8.el9.0.1.x86_64 25/154 2026-04-01T09:52:07.417 INFO:teuthology.orchestra.run.vm07.stdout: Installing : unzip-6.0-59.el9.x86_64 26/154 2026-04-01T09:52:07.437 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-urllib3-1.26.5-6.el9_7.1.noarch 27/154 2026-04-01T09:52:07.443 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-requests-2.25.1-10.el9_6.noarch 28/154 2026-04-01T09:52:07.454 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libquadmath-11.5.0-11.el9.x86_64 29/154 2026-04-01T09:52:07.457 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libgfortran-11.5.0-11.el9.x86_64 30/154 2026-04-01T09:52:07.463 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 31/154 2026-04-01T09:52:07.509 INFO:teuthology.orchestra.run.vm07.stdout: Installing : re2-1:20211101-20.el9.x86_64 32/154 2026-04-01T09:52:07.547 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 33/154 2026-04-01T09:52:07.557 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 
34/154 2026-04-01T09:52:07.569 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el 35/154 2026-04-01T09:52:07.586 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso 36/154 2026-04-01T09:52:07.596 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 37/154 2026-04-01T09:52:07.631 INFO:teuthology.orchestra.run.vm07.stdout: Installing : zip-3.0-35.el9.x86_64 38/154 2026-04-01T09:52:07.637 INFO:teuthology.orchestra.run.vm07.stdout: Installing : luarocks-3.9.2-5.el9.noarch 39/154 2026-04-01T09:52:07.649 INFO:teuthology.orchestra.run.vm07.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 40/154 2026-04-01T09:52:07.667 INFO:teuthology.orchestra.run.vm07.stdout: Installing : protobuf-compiler-3.14.0-17.el9_7.x86_64 41/154 2026-04-01T09:52:07.678 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-04-01T09:52:07.689 INFO:teuthology.orchestra.run.vm00.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/154 2026-04-01T09:52:07.702 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/154 2026-04-01T09:52:07.740 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-rsa-4.9-2.el9.noarch 42/154 2026-04-01T09:52:07.745 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 43/154 2026-04-01T09:52:07.752 INFO:teuthology.orchestra.run.vm07.stdout: Installing : xmlsec1-openssl-1.2.29-13.el9.x86_64 44/154 2026-04-01T09:52:07.760 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-xmlsec-1.3.13-1.el9.x86_64 45/154 2026-04-01T09:52:07.784 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 46/154 2026-04-01T09:52:07.791 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x8 47/154 2026-04-01T09:52:07.804 
INFO:teuthology.orchestra.run.vm07.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 48/154 2026-04-01T09:52:07.842 INFO:teuthology.orchestra.run.vm07.stdout: Installing : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso 49/154 2026-04-01T09:52:07.851 INFO:teuthology.orchestra.run.vm07.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 50/154 2026-04-01T09:52:07.857 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-toml-0.10.2-6.el9.0.1.noarch 51/154 2026-04-01T09:52:07.868 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 52/154 2026-04-01T09:52:07.874 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 53/154 2026-04-01T09:52:07.903 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/154 2026-04-01T09:52:07.906 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154 2026-04-01T09:52:07.918 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 54/154 2026-04-01T09:52:07.930 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-protobuf-3.14.0-17.el9_7.noarch 55/154 2026-04-01T09:52:07.946 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 56/154 2026-04-01T09:52:07.984 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154 2026-04-01T09:52:07.986 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 5/154 2026-04-01T09:52:08.007 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 5/154 2026-04-01T09:52:08.012 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.cly 6/154 2026-04-01T09:52:08.014 INFO:teuthology.orchestra.run.vm00.stdout: Installing : 
libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 7/154 2026-04-01T09:52:08.015 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jmespath-1.0.1-1.el9_7.noarch 57/154 2026-04-01T09:52:08.046 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 67/154 2026-04-01T09:52:08.050 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 7/154 2026-04-01T09:52:08.060 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso. 8/154 2026-04-01T09:52:08.069 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 68/154 2026-04-01T09:52:08.074 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libxslt-1.1.34-13.el9_6.x86_64 9/154 2026-04-01T09:52:08.085 INFO:teuthology.orchestra.run.vm03.stdout: Installing : boost-program-options-1.75.0-13.el9_7.x86_64 69/154 2026-04-01T09:52:08.112 INFO:teuthology.orchestra.run.vm03.stdout: Installing : smartmontools-1:7.2-9.el9.x86_64 70/154 2026-04-01T09:52:08.130 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 70/154 2026-04-01T09:52:08.130 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartd.service → /usr/lib/systemd/system/smartd.service. 
2026-04-01T09:52:08.130 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:08.139 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 10/154
2026-04-01T09:52:08.146 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 11/154
2026-04-01T09:52:08.151 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 12/154
2026-04-01T09:52:08.154 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyparsing-2.4.7-9.el9.0.1.noarch 71/154
2026-04-01T09:52:08.155 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lmdb-libs-0.9.29-3.el9.x86_64 13/154
2026-04-01T09:52:08.164 INFO:teuthology.orchestra.run.vm00.stdout: Installing : liboath-2.6.12-1.el9.x86_64 14/154
2026-04-01T09:52:08.164 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-packaging-20.9-5.el9.noarch 72/154
2026-04-01T09:52:08.182 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ply-3.11-14.el9.0.1.noarch 73/154
2026-04-01T09:52:08.202 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 74/154
2026-04-01T09:52:08.307 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 75/154
2026-04-01T09:52:08.319 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 15/154
2026-04-01T09:52:08.322 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 16/154
2026-04-01T09:52:08.322 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cryptography-36.0.1-5.el9_6.x86_64 76/154
2026-04-01T09:52:08.342 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-devel-3.9.23-2.el9.x86_64 58/154
2026-04-01T09:52:08.348 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 16/154
2026-04-01T09:52:08.356 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 77/154
2026-04-01T09:52:08.369 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cheroot-10.0.1-5.el9.noarch 78/154
2026-04-01T09:52:08.377 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 79/154
2026-04-01T09:52:08.380 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 59/154
2026-04-01T09:52:08.381 INFO:teuthology.orchestra.run.vm03.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 80/154
2026-04-01T09:52:08.386 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jinja2-2.11.3-8.el9_5.noarch 60/154
2026-04-01T09:52:08.392 INFO:teuthology.orchestra.run.vm07.stdout: Installing : perl-Benchmark-1.23-481.1.el9_6.noarch 61/154
2026-04-01T09:52:08.399 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 17/154
2026-04-01T09:52:08.410 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-lxml-4.6.5-3.el9.x86_64 18/154
2026-04-01T09:52:08.420 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-24.09.0-1.el9.x86_64 81/154
2026-04-01T09:52:08.422 INFO:teuthology.orchestra.run.vm00.stdout: Installing : xmlsec1-1.2.29-13.el9.x86_64 19/154
2026-04-01T09:52:08.424 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 20/154
2026-04-01T09:52:08.426 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatlib-24.09.0-1.el9.x86_64 81/154
2026-04-01T09:52:08.428 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatlib-service-24.09.0-1.el9.x86_64 82/154
2026-04-01T09:52:08.452 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-24.09.0-1.el9.x86_64 82/154
2026-04-01T09:52:08.455 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 20/154
2026-04-01T09:52:08.459 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 21/154
2026-04-01T09:52:08.466 INFO:teuthology.orchestra.run.vm07.stdout: Installing : openblas-0.3.29-1.el9.x86_64 62/154
2026-04-01T09:52:08.470 INFO:teuthology.orchestra.run.vm07.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 63/154
2026-04-01T09:52:08.505 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 21/154
2026-04-01T09:52:08.505 INFO:teuthology.orchestra.run.vm07.stdout: Installing : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 64/154
2026-04-01T09:52:08.518 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-0.4.8-7.el9_7.noarch 22/154
2026-04-01T09:52:08.526 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-3.14.0-17.el9_7.x86_64 23/154
2026-04-01T09:52:08.530 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-5.4.4-4.el9.x86_64 24/154
2026-04-01T09:52:08.537 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-3.0.4-8.el9.0.1.x86_64 25/154
2026-04-01T09:52:08.568 INFO:teuthology.orchestra.run.vm00.stdout: Installing : unzip-6.0-59.el9.x86_64 26/154
2026-04-01T09:52:08.589 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-urllib3-1.26.5-6.el9_7.1.noarch 27/154
2026-04-01T09:52:08.595 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-2.25.1-10.el9_6.noarch 28/154
2026-04-01T09:52:08.605 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libquadmath-11.5.0-11.el9.x86_64 29/154
2026-04-01T09:52:08.609 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libgfortran-11.5.0-11.el9.x86_64 30/154
2026-04-01T09:52:08.613 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 31/154
2026-04-01T09:52:08.620 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 83/154
2026-04-01T09:52:08.624 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvme-cli-2.13-1.el9.x86_64 84/154
2026-04-01T09:52:08.650 INFO:teuthology.orchestra.run.vm00.stdout: Installing : re2-1:20211101-20.el9.x86_64 32/154
2026-04-01T09:52:08.692 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 33/154
2026-04-01T09:52:08.701 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 34/154
2026-04-01T09:52:08.715 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el 35/154
2026-04-01T09:52:08.734 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso 36/154
2026-04-01T09:52:08.743 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 37/154
2026-04-01T09:52:08.782 INFO:teuthology.orchestra.run.vm00.stdout: Installing : zip-3.0-35.el9.x86_64 38/154
2026-04-01T09:52:08.789 INFO:teuthology.orchestra.run.vm00.stdout: Installing : luarocks-3.9.2-5.el9.noarch 39/154
2026-04-01T09:52:08.799 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 40/154
2026-04-01T09:52:08.818 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-compiler-3.14.0-17.el9_7.x86_64 41/154
2026-04-01T09:52:08.892 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rsa-4.9-2.el9.noarch 42/154
2026-04-01T09:52:08.904 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 43/154
2026-04-01T09:52:08.910 INFO:teuthology.orchestra.run.vm00.stdout: Installing : xmlsec1-openssl-1.2.29-13.el9.x86_64 44/154
2026-04-01T09:52:08.918 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-xmlsec-1.3.13-1.el9.x86_64 45/154
2026-04-01T09:52:08.937 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 46/154
2026-04-01T09:52:08.941 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x8 47/154
2026-04-01T09:52:08.956 INFO:teuthology.orchestra.run.vm00.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 48/154
2026-04-01T09:52:08.961 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvme-cli-2.13-1.el9.x86_64 84/154
2026-04-01T09:52:08.962 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-04-01T09:52:08.962 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:08.965 INFO:teuthology.orchestra.run.vm07.stdout: Installing : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 65/154
2026-04-01T09:52:08.971 INFO:teuthology.orchestra.run.vm03.stdout: Installing : mailcap-2.1.49-5.el9.0.2.noarch 85/154
2026-04-01T09:52:08.972 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso 49/154
2026-04-01T09:52:08.976 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 86/154
2026-04-01T09:52:08.983 INFO:teuthology.orchestra.run.vm00.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 50/154
2026-04-01T09:52:08.989 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-toml-0.10.2-6.el9.0.1.noarch 51/154
2026-04-01T09:52:08.998 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 52/154
2026-04-01T09:52:09.000 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:09.000 INFO:teuthology.orchestra.run.vm03.stdout:Creating group 'libstoragemgmt' with GID 992.
2026-04-01T09:52:09.000 INFO:teuthology.orchestra.run.vm03.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 992 and GID 992.
2026-04-01T09:52:09.000 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:09.006 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 53/154
2026-04-01T09:52:09.020 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:09.042 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 54/154
2026-04-01T09:52:09.053 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-protobuf-3.14.0-17.el9_7.noarch 55/154
2026-04-01T09:52:09.055 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:09.055 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-04-01T09:52:09.055 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:09.065 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 56/154
2026-04-01T09:52:09.077 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-numpy-1:1.23.5-2.el9_7.x86_64 66/154
2026-04-01T09:52:09.086 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 88/154
2026-04-01T09:52:09.114 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jmespath-1.0.1-1.el9_7.noarch 57/154
2026-04-01T09:52:09.116 INFO:teuthology.orchestra.run.vm03.stdout: Installing : fuse-2.9.9-17.el9.x86_64 89/154
2026-04-01T09:52:09.123 INFO:teuthology.orchestra.run.vm03.stdout: Installing : cryptsetup-2.7.2-4.el9.x86_64 90/154
2026-04-01T09:52:09.131 INFO:teuthology.orchestra.run.vm03.stdout: Installing : c-ares-1.19.1-2.el9_4.x86_64 91/154
2026-04-01T09:52:09.136 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 92/154
2026-04-01T09:52:09.158 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 93/154
2026-04-01T09:52:09.165 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 94/154
2026-04-01T09:52:09.236 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 95/154
2026-04-01T09:52:09.248 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 96/154
2026-04-01T09:52:09.265 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 97/154
2026-04-01T09:52:09.279 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 98/154
2026-04-01T09:52:09.290 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 99/154
2026-04-01T09:52:09.319 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-msgpack-1.0.3-2.el9.x86_64 100/154
2026-04-01T09:52:09.337 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-influxdb-5.3.1-1.el9.noarch 101/154
2026-04-01T09:52:09.362 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-isodate-0.6.1-3.el9.noarch 102/154
2026-04-01T09:52:09.371 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-saml-1.16.0-1.el9.noarch 103/154
2026-04-01T09:52:09.385 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 104/154
2026-04-01T09:52:09.425 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-devel-3.9.23-2.el9.x86_64 58/154
2026-04-01T09:52:09.444 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 105/154
2026-04-01T09:52:09.462 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 59/154
2026-04-01T09:52:09.467 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jinja2-2.11.3-8.el9_5.noarch 60/154
2026-04-01T09:52:09.472 INFO:teuthology.orchestra.run.vm00.stdout: Installing : perl-Benchmark-1.23-481.1.el9_6.noarch 61/154
2026-04-01T09:52:09.543 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-0.3.29-1.el9.x86_64 62/154
2026-04-01T09:52:09.546 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 63/154
2026-04-01T09:52:09.573 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 64/154
2026-04-01T09:52:09.879 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 106/154
2026-04-01T09:52:09.896 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 107/154
2026-04-01T09:52:09.903 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 108/154
2026-04-01T09:52:09.912 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 109/154
2026-04-01T09:52:09.920 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 110/154
2026-04-01T09:52:09.927 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 111/154
2026-04-01T09:52:09.965 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 112/154
2026-04-01T09:52:09.973 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cherrypy-18.10.0-5.el9.noarch 113/154
2026-04-01T09:52:09.981 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 114/154
2026-04-01T09:52:09.983 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 67/154
2026-04-01T09:52:09.985 INFO:teuthology.orchestra.run.vm03.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 115/154
2026-04-01T09:52:09.993 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 116/154
2026-04-01T09:52:10.006 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 68/154
2026-04-01T09:52:10.013 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 65/154
2026-04-01T09:52:10.027 INFO:teuthology.orchestra.run.vm07.stdout: Installing : boost-program-options-1.75.0-13.el9_7.x86_64 69/154
2026-04-01T09:52:10.030 INFO:teuthology.orchestra.run.vm07.stdout: Installing : smartmontools-1:7.2-9.el9.x86_64 70/154
2026-04-01T09:52:10.053 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 70/154
2026-04-01T09:52:10.053 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartd.service → /usr/lib/systemd/system/smartd.service.
2026-04-01T09:52:10.053 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:10.075 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pyparsing-2.4.7-9.el9.0.1.noarch 71/154
2026-04-01T09:52:10.088 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-packaging-20.9-5.el9.noarch 72/154
2026-04-01T09:52:10.111 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-ply-3.11-14.el9.0.1.noarch 73/154
2026-04-01T09:52:10.114 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-1:1.23.5-2.el9_7.x86_64 66/154
2026-04-01T09:52:10.135 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 74/154
2026-04-01T09:52:10.251 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 75/154
2026-04-01T09:52:10.275 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cryptography-36.0.1-5.el9_6.x86_64 76/154
2026-04-01T09:52:10.311 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 77/154
2026-04-01T09:52:10.321 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cheroot-10.0.1-5.el9.noarch 78/154
2026-04-01T09:52:10.329 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 79/154
2026-04-01T09:52:10.331 INFO:teuthology.orchestra.run.vm07.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 80/154
2026-04-01T09:52:10.351 INFO:teuthology.orchestra.run.vm03.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 117/154
2026-04-01T09:52:10.368 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: qatlib-24.09.0-1.el9.x86_64 81/154
2026-04-01T09:52:10.389 INFO:teuthology.orchestra.run.vm07.stdout: Installing : qatlib-24.09.0-1.el9.x86_64 81/154
2026-04-01T09:52:10.396 INFO:teuthology.orchestra.run.vm07.stdout: Installing : qatlib-service-24.09.0-1.el9.x86_64 82/154
2026-04-01T09:52:10.409 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 118/154
2026-04-01T09:52:10.422 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: qatlib-service-24.09.0-1.el9.x86_64 82/154
2026-04-01T09:52:10.438 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 118/154
2026-04-01T09:52:10.440 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x8 119/154
2026-04-01T09:52:10.602 INFO:teuthology.orchestra.run.vm07.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 83/154
2026-04-01T09:52:10.606 INFO:teuthology.orchestra.run.vm07.stdout: Installing : nvme-cli-2.13-1.el9.x86_64 84/154
2026-04-01T09:52:10.948 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: nvme-cli-2.13-1.el9.x86_64 84/154
2026-04-01T09:52:10.948 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-04-01T09:52:10.948 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:10.961 INFO:teuthology.orchestra.run.vm07.stdout: Installing : mailcap-2.1.49-5.el9.0.2.noarch 85/154
2026-04-01T09:52:10.965 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 86/154
2026-04-01T09:52:11.003 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:11.004 INFO:teuthology.orchestra.run.vm07.stdout:Creating group 'libstoragemgmt' with GID 992.
2026-04-01T09:52:11.004 INFO:teuthology.orchestra.run.vm07.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 992 and GID 992.
2026-04-01T09:52:11.004 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:11.022 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:11.057 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:11.057 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-04-01T09:52:11.057 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:11.061 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 67/154
2026-04-01T09:52:11.089 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 68/154
2026-04-01T09:52:11.091 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 88/154
2026-04-01T09:52:11.108 INFO:teuthology.orchestra.run.vm00.stdout: Installing : boost-program-options-1.75.0-13.el9_7.x86_64 69/154
2026-04-01T09:52:11.113 INFO:teuthology.orchestra.run.vm00.stdout: Installing : smartmontools-1:7.2-9.el9.x86_64 70/154
2026-04-01T09:52:11.124 INFO:teuthology.orchestra.run.vm07.stdout: Installing : fuse-2.9.9-17.el9.x86_64 89/154
2026-04-01T09:52:11.133 INFO:teuthology.orchestra.run.vm07.stdout: Installing : cryptsetup-2.7.2-4.el9.x86_64 90/154
2026-04-01T09:52:11.137 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 70/154
2026-04-01T09:52:11.137 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartd.service → /usr/lib/systemd/system/smartd.service.
2026-04-01T09:52:11.137 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:11.143 INFO:teuthology.orchestra.run.vm07.stdout: Installing : c-ares-1.19.1-2.el9_4.x86_64 91/154
2026-04-01T09:52:11.149 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 92/154
2026-04-01T09:52:11.161 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyparsing-2.4.7-9.el9.0.1.noarch 71/154
2026-04-01T09:52:11.173 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 93/154
2026-04-01T09:52:11.173 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-packaging-20.9-5.el9.noarch 72/154
2026-04-01T09:52:11.182 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 94/154
2026-04-01T09:52:11.195 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ply-3.11-14.el9.0.1.noarch 73/154
2026-04-01T09:52:11.218 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 74/154
2026-04-01T09:52:11.266 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 95/154
2026-04-01T09:52:11.279 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 96/154
2026-04-01T09:52:11.297 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 97/154
2026-04-01T09:52:11.313 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 98/154
2026-04-01T09:52:11.327 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 99/154
2026-04-01T09:52:11.340 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 75/154
2026-04-01T09:52:11.357 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cryptography-36.0.1-5.el9_6.x86_64 76/154
2026-04-01T09:52:11.365 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-msgpack-1.0.3-2.el9.x86_64 100/154
2026-04-01T09:52:11.385 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-influxdb-5.3.1-1.el9.noarch 101/154
2026-04-01T09:52:11.393 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 77/154
2026-04-01T09:52:11.403 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cheroot-10.0.1-5.el9.noarch 78/154
2026-04-01T09:52:11.410 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 79/154
2026-04-01T09:52:11.413 INFO:teuthology.orchestra.run.vm00.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 80/154
2026-04-01T09:52:11.413 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-isodate-0.6.1-3.el9.noarch 102/154
2026-04-01T09:52:11.423 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-saml-1.16.0-1.el9.noarch 103/154
2026-04-01T09:52:11.438 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 104/154
2026-04-01T09:52:11.449 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-24.09.0-1.el9.x86_64 81/154
2026-04-01T09:52:11.455 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-24.09.0-1.el9.x86_64 81/154
2026-04-01T09:52:11.457 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-service-24.09.0-1.el9.x86_64 82/154
2026-04-01T09:52:11.482 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-24.09.0-1.el9.x86_64 82/154
2026-04-01T09:52:11.500 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 105/154
2026-04-01T09:52:11.642 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 83/154
2026-04-01T09:52:11.646 INFO:teuthology.orchestra.run.vm00.stdout: Installing : nvme-cli-2.13-1.el9.x86_64 84/154
2026-04-01T09:52:11.893 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:11.899 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:11.954 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 106/154
2026-04-01T09:52:11.976 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 107/154
2026-04-01T09:52:11.983 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 108/154
2026-04-01T09:52:11.993 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 109/154
2026-04-01T09:52:12.003 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 110/154
2026-04-01T09:52:12.010 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 111/154
2026-04-01T09:52:12.012 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: nvme-cli-2.13-1.el9.x86_64 84/154
2026-04-01T09:52:12.012 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-04-01T09:52:12.012 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:12.021 INFO:teuthology.orchestra.run.vm00.stdout: Installing : mailcap-2.1.49-5.el9.0.2.noarch 85/154
2026-04-01T09:52:12.025 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 86/154
2026-04-01T09:52:12.054 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:12.054 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'libstoragemgmt' with GID 992.
2026-04-01T09:52:12.054 INFO:teuthology.orchestra.run.vm00.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 992 and GID 992.
2026-04-01T09:52:12.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:12.055 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 112/154
2026-04-01T09:52:12.063 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-cherrypy-18.10.0-5.el9.noarch 113/154
2026-04-01T09:52:12.070 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:12.073 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 114/154
2026-04-01T09:52:12.077 INFO:teuthology.orchestra.run.vm07.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 115/154
2026-04-01T09:52:12.087 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 116/154
2026-04-01T09:52:12.105 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 87/154
2026-04-01T09:52:12.105 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-04-01T09:52:12.105 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:12.135 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 88/154
2026-04-01T09:52:12.163 INFO:teuthology.orchestra.run.vm00.stdout: Installing : fuse-2.9.9-17.el9.x86_64 89/154
2026-04-01T09:52:12.170 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cryptsetup-2.7.2-4.el9.x86_64 90/154
2026-04-01T09:52:12.177 INFO:teuthology.orchestra.run.vm00.stdout: Installing : c-ares-1.19.1-2.el9_4.x86_64 91/154
2026-04-01T09:52:12.182 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 92/154
2026-04-01T09:52:12.205 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 93/154
2026-04-01T09:52:12.212 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 94/154
2026-04-01T09:52:12.268 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:12.276 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 121/154
2026-04-01T09:52:12.276 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 95/154
2026-04-01T09:52:12.287 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 96/154
2026-04-01T09:52:12.305 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 97/154
2026-04-01T09:52:12.320 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 98/154
2026-04-01T09:52:12.327 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 121/154
2026-04-01T09:52:12.327 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-04-01T09:52:12.327 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-04-01T09:52:12.327 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:12.329 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 99/154
2026-04-01T09:52:12.333 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 122/154
2026-04-01T09:52:12.355 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-msgpack-1.0.3-2.el9.x86_64 100/154
2026-04-01T09:52:12.375 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-influxdb-5.3.1-1.el9.noarch 101/154
2026-04-01T09:52:12.405 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-isodate-0.6.1-3.el9.noarch 102/154
2026-04-01T09:52:12.415 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-saml-1.16.0-1.el9.noarch 103/154
2026-04-01T09:52:12.433 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 104/154
2026-04-01T09:52:12.435 INFO:teuthology.orchestra.run.vm07.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 117/154
2026-04-01T09:52:12.466 INFO:teuthology.orchestra.run.vm07.stdout: Installing : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 118/154
2026-04-01T09:52:12.491 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 118/154
2026-04-01T09:52:12.493 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x8 119/154
2026-04-01T09:52:12.506 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 105/154
2026-04-01T09:52:12.937 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 106/154
2026-04-01T09:52:12.955 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 107/154
2026-04-01T09:52:12.962 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 108/154
2026-04-01T09:52:12.971 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 109/154
2026-04-01T09:52:12.981 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 110/154
2026-04-01T09:52:12.988 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 111/154
2026-04-01T09:52:13.029 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 112/154
2026-04-01T09:52:13.039 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cherrypy-18.10.0-5.el9.noarch 113/154
2026-04-01T09:52:13.048 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 114/154
2026-04-01T09:52:13.053 INFO:teuthology.orchestra.run.vm00.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 115/154
2026-04-01T09:52:13.062 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 116/154
2026-04-01T09:52:13.424 INFO:teuthology.orchestra.run.vm00.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 117/154
2026-04-01T09:52:13.427 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 118/154
2026-04-01T09:52:13.454 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 118/154
2026-04-01T09:52:13.493 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x8 119/154
2026-04-01T09:52:13.872 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:13.878 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:14.265 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:14.273 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 121/154
2026-04-01T09:52:14.322 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 121/154
2026-04-01T09:52:14.322 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-04-01T09:52:14.322 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-04-01T09:52:14.322 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:14.329 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 122/154
2026-04-01T09:52:14.930 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:14.936 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:15.322 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 120/154
2026-04-01T09:52:15.329 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 121/154
2026-04-01T09:52:15.382 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 121/154
2026-04-01T09:52:15.382 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-04-01T09:52:15.382 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-04-01T09:52:15.382 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:15.389 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 122/154
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 122/154
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /sys
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /proc
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /mnt
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /var/tmp
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /home
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /root
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /tmp
2026-04-01T09:52:19.689 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:19.814 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 123/154
2026-04-01T09:52:19.835 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 123/154
2026-04-01T09:52:19.835 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:19.835 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-04-01T09:52:19.835 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:20.904 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 124/154
2026-04-01T09:52:20.939 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 124/154
2026-04-01T09:52:20.939 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:20.939 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-04-01T09:52:20.939 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-04-01T09:52:20.939 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-04-01T09:52:20.939 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:21.093 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 125/154
2026-04-01T09:52:21.124 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 125/154
2026-04-01T09:52:21.124 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:21.124 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-04-01T09:52:21.124 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-04-01T09:52:21.124 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-04-01T09:52:21.124 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:21.445 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 126/154
2026-04-01T09:52:21.477 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 126/154
2026-04-01T09:52:21.477 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:21.477 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-04-01T09:52:21.477 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-04-01T09:52:21.477 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-04-01T09:52:21.477 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:21.519 INFO:teuthology.orchestra.run.vm03.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 127/154
2026-04-01T09:52:21.522 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 122/154
2026-04-01T09:52:21.523 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /sys
2026-04-01T09:52:21.523 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /proc
2026-04-01T09:52:21.523 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /mnt
2026-04-01T09:52:21.523 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /var/tmp
2026-04-01T09:52:21.523 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /home
2026-04-01T09:52:21.523 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /root
2026-04-01T09:52:21.523 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /tmp
2026-04-01T09:52:21.523 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:21.595 INFO:teuthology.orchestra.run.vm03.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 128/154
2026-04-01T09:52:21.841 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 129/154
2026-04-01T09:52:21.858 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 123/154
2026-04-01T09:52:21.906 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 123/154
2026-04-01T09:52:21.906 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:21.906 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-04-01T09:52:21.906 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:22.013 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 130/154
2026-04-01T09:52:22.417 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 131/154
2026-04-01T09:52:22.476 INFO:teuthology.orchestra.run.vm03.stdout: Installing : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 131/154
2026-04-01T09:52:22.500 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 132/154
2026-04-01T09:52:22.560 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 133/154
2026-04-01T09:52:22.587 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 134/154
2026-04-01T09:52:22.767 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 124/154
2026-04-01T09:52:22.800 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 124/154
2026-04-01T09:52:22.800 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:22.800 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-04-01T09:52:22.800 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-04-01T09:52:22.800 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-04-01T09:52:22.800 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:22.937 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 125/154
2026-04-01T09:52:22.966 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 125/154
2026-04-01T09:52:22.966 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:22.966 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-04-01T09:52:22.966 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-04-01T09:52:22.966 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-04-01T09:52:22.966 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:23.135 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 122/154
2026-04-01T09:52:23.136 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /sys
2026-04-01T09:52:23.136 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /proc
2026-04-01T09:52:23.136 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /mnt
2026-04-01T09:52:23.136 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /var/tmp
2026-04-01T09:52:23.136 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /home
2026-04-01T09:52:23.136 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /root
2026-04-01T09:52:23.136 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /tmp
2026-04-01T09:52:23.136 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:23.248 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 123/154
2026-04-01T09:52:23.249 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 126/154
2026-04-01T09:52:23.273 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 123/154
2026-04-01T09:52:23.273 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:23.273 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-04-01T09:52:23.273 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:23.278 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 126/154
2026-04-01T09:52:23.278 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:23.278 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-04-01T09:52:23.278 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-04-01T09:52:23.278 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-04-01T09:52:23.278 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:23.322 INFO:teuthology.orchestra.run.vm07.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 127/154
2026-04-01T09:52:23.386 INFO:teuthology.orchestra.run.vm07.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 128/154
2026-04-01T09:52:23.499 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 129/154
2026-04-01T09:52:23.539 INFO:teuthology.orchestra.run.vm07.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 130/154
2026-04-01T09:52:23.893 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 131/154
2026-04-01T09:52:23.898 INFO:teuthology.orchestra.run.vm07.stdout: Installing : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 131/154
2026-04-01T09:52:23.908 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 132/154
2026-04-01T09:52:23.942 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 133/154
2026-04-01T09:52:23.946 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 134/154
2026-04-01T09:52:24.128 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 124/154
2026-04-01T09:52:24.162 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 124/154
2026-04-01T09:52:24.162 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:24.162 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-04-01T09:52:24.162 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-04-01T09:52:24.162 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-04-01T09:52:24.162 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:24.277 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 125/154
2026-04-01T09:52:24.302 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 125/154
2026-04-01T09:52:24.302 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:24.302 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-04-01T09:52:24.302 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-04-01T09:52:24.302 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-04-01T09:52:24.302 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:24.507 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 134/154
2026-04-01T09:52:24.525 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 135/154
2026-04-01T09:52:24.560 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 126/154
2026-04-01T09:52:24.586 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 126/154
2026-04-01T09:52:24.586 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:24.586 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-04-01T09:52:24.586 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-04-01T09:52:24.586 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-04-01T09:52:24.586 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:24.625 INFO:teuthology.orchestra.run.vm00.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 127/154
2026-04-01T09:52:24.687 INFO:teuthology.orchestra.run.vm00.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 128/154
2026-04-01T09:52:24.702 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 129/154
2026-04-01T09:52:24.706 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 130/154
2026-04-01T09:52:24.784 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 131/154
2026-04-01T09:52:24.789 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 131/154
2026-04-01T09:52:24.799 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 132/154
2026-04-01T09:52:24.830 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 133/154
2026-04-01T09:52:24.834 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 134/154
2026-04-01T09:52:25.096 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 135/154
2026-04-01T09:52:25.099 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 136/154
2026-04-01T09:52:25.118 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 136/154
2026-04-01T09:52:25.120 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 137/154
2026-04-01T09:52:25.198 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 137/154
2026-04-01T09:52:25.256 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 138/154
2026-04-01T09:52:25.259 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 139/154
2026-04-01T09:52:25.290 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 139/154
2026-04-01T09:52:25.290 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:25.290 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-04-01T09:52:25.291 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-04-01T09:52:25.291 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-04-01T09:52:25.291 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:25.305 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 140/154
2026-04-01T09:52:25.318 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 140/154
2026-04-01T09:52:25.901 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 134/154
2026-04-01T09:52:25.918 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 135/154
2026-04-01T09:52:26.518 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 135/154
2026-04-01T09:52:26.522 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 136/154
2026-04-01T09:52:26.543 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 136/154
2026-04-01T09:52:26.544 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 137/154
2026-04-01T09:52:26.609 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 141/154
2026-04-01T09:52:26.627 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 137/154
2026-04-01T09:52:26.633 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 142/154
2026-04-01T09:52:26.660 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 142/154
2026-04-01T09:52:26.660 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:26.660 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-04-01T09:52:26.660 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-04-01T09:52:26.660 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-04-01T09:52:26.660 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:26.673 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-8.g05971582 143/154
2026-04-01T09:52:26.682 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 138/154
2026-04-01T09:52:26.685 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 139/154
2026-04-01T09:52:26.699 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-8.g05971582 143/154
2026-04-01T09:52:26.699 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:26.699 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-04-01T09:52:26.699 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:26.713 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 139/154
2026-04-01T09:52:26.713 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:26.713 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-04-01T09:52:26.713 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-04-01T09:52:26.713 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-04-01T09:52:26.713 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:26.730 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 140/154
2026-04-01T09:52:26.746 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 140/154
2026-04-01T09:52:26.860 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 144/154
2026-04-01T09:52:26.885 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 144/154
2026-04-01T09:52:26.885 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:26.885 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-04-01T09:52:26.885 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-04-01T09:52:26.885 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-04-01T09:52:26.885 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:26.953 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 134/154
2026-04-01T09:52:26.971 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 135/154
2026-04-01T09:52:27.529 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 135/154
2026-04-01T09:52:27.532 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 136/154
2026-04-01T09:52:27.551 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 136/154
2026-04-01T09:52:27.552 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 137/154
2026-04-01T09:52:27.631 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 137/154
2026-04-01T09:52:27.686 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 138/154
2026-04-01T09:52:27.689 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 139/154
2026-04-01T09:52:27.718 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 139/154
2026-04-01T09:52:27.718 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:27.718 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-04-01T09:52:27.718 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-04-01T09:52:27.718 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-04-01T09:52:27.718 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:27.737 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 140/154
2026-04-01T09:52:27.752 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 140/154
2026-04-01T09:52:27.993 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 141/154
2026-04-01T09:52:27.997 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 142/154
2026-04-01T09:52:28.025 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 142/154
2026-04-01T09:52:28.025 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:28.025 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-04-01T09:52:28.025 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-04-01T09:52:28.025 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-04-01T09:52:28.025 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:28.041 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-8.g05971582 143/154
2026-04-01T09:52:28.069 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-8.g05971582 143/154
2026-04-01T09:52:28.069 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:28.069 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-04-01T09:52:28.069 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:28.223 INFO:teuthology.orchestra.run.vm07.stdout: Installing : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 144/154
2026-04-01T09:52:28.252 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 144/154
2026-04-01T09:52:28.252 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T09:52:28.252 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-04-01T09:52:28.252 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-04-01T09:52:28.252 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-04-01T09:52:28.252 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T09:52:29.000 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 141/154 2026-04-01T09:52:29.004 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 142/154 2026-04-01T09:52:29.038 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 142/154 2026-04-01T09:52:29.038 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T09:52:29.038 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-04-01T09:52:29.038 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-04-01T09:52:29.039 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-04-01T09:52:29.039 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:52:29.052 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-8.g05971582 143/154 2026-04-01T09:52:29.078 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-8.g05971582 143/154 2026-04-01T09:52:29.078 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T09:52:29.078 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-04-01T09:52:29.078 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:52:29.245 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 144/154 2026-04-01T09:52:29.273 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 144/154 2026-04-01T09:52:29.273 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T09:52:29.273 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-04-01T09:52:29.273 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-04-01T09:52:29.273 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-04-01T09:52:29.274 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:52:31.534 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 145/154 2026-04-01T09:52:31.570 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 146/154 2026-04-01T09:52:31.579 INFO:teuthology.orchestra.run.vm03.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 147/154 2026-04-01T09:52:31.587 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clys 148/154 2026-04-01T09:52:31.599 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 149/154 2026-04-01T09:52:31.607 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 150/154 2026-04-01T09:52:31.627 INFO:teuthology.orchestra.run.vm03.stdout: Installing : bzip2-1.0.8-10.el9_5.x86_64 151/154 2026-04-01T09:52:31.632 INFO:teuthology.orchestra.run.vm03.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 152/154 2026-04-01T09:52:31.632 INFO:teuthology.orchestra.run.vm03.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 153/154 2026-04-01T09:52:31.655 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 153/154 2026-04-01T09:52:31.655 INFO:teuthology.orchestra.run.vm03.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 154/154 2026-04-01T09:52:33.048 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 145/154 2026-04-01T09:52:33.089 INFO:teuthology.orchestra.run.vm07.stdout: Installing : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 146/154 2026-04-01T09:52:33.099 INFO:teuthology.orchestra.run.vm07.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 147/154 2026-04-01T09:52:33.107 INFO:teuthology.orchestra.run.vm07.stdout: Installing : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clys 
148/154 2026-04-01T09:52:33.120 INFO:teuthology.orchestra.run.vm07.stdout: Installing : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 149/154 2026-04-01T09:52:33.132 INFO:teuthology.orchestra.run.vm07.stdout: Installing : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 150/154 2026-04-01T09:52:33.323 INFO:teuthology.orchestra.run.vm07.stdout: Installing : bzip2-1.0.8-10.el9_5.x86_64 151/154 2026-04-01T09:52:33.328 INFO:teuthology.orchestra.run.vm07.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 152/154 2026-04-01T09:52:33.328 INFO:teuthology.orchestra.run.vm07.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 153/154 2026-04-01T09:52:33.349 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 153/154 2026-04-01T09:52:33.349 INFO:teuthology.orchestra.run.vm07.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 154/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 154/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 2/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 3/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-8.g05971582 5/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 6/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 7/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: 
Verifying : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 8/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 9/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 10/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 11/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 12/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.cly 13/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clys 14/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 15/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 16/154 2026-04-01T09:52:33.357 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 17/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso 18/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 19/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 20/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el 21/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 
22/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso 23/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso. 24/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x8 25/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x8 26/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 27/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 28/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 29/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 30/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 31/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 32/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 33/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 34/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 35/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 
36/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 37/154 2026-04-01T09:52:33.358 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 38/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 39/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 40/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 41/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 42/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 43/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 44/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 45/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 46/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 47/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 48/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 49/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 50/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 51/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 52/154 2026-04-01T09:52:33.361 
INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 53/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 54/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cheroot-10.0.1-5.el9.noarch 55/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cherrypy-18.10.0-5.el9.noarch 56/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 57/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 58/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 59/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-influxdb-5.3.1-1.el9.noarch 60/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-isodate-0.6.1-3.el9.noarch 61/154 2026-04-01T09:52:33.361 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 62/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 63/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 64/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 65/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 66/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 67/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 68/154 
2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 69/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-msgpack-1.0.3-2.el9.x86_64 70/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 71/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 72/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 73/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 74/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 75/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 76/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-saml-1.16.0-1.el9.noarch 77/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 78/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 79/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 80/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-xmlsec-1.3.13-1.el9.x86_64 81/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 82/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : re2-1:20211101-20.el9.x86_64 84/154 2026-04-01T09:52:33.362 
INFO:teuthology.orchestra.run.vm03.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 85/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 86/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : bzip2-1.0.8-10.el9_5.x86_64 87/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : c-ares-1.19.1-2.el9_4.x86_64 88/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cryptsetup-2.7.2-4.el9.x86_64 89/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 90/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 91/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 92/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libgfortran-11.5.0-11.el9.x86_64 93/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libquadmath-11.5.0-11.el9.x86_64 94/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lmdb-libs-0.9.29-3.el9.x86_64 95/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : mailcap-2.1.49-5.el9.0.2.noarch 96/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvme-cli-2.13-1.el9.x86_64 97/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 98/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 99/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cryptography-36.0.1-5.el9_6.x86_64 100/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ply-3.11-14.el9.0.1.noarch 101/154 2026-04-01T09:52:33.362 
INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 102/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.0.1.noarch 103/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-2.25.1-10.el9_6.noarch 104/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urllib3-1.26.5-6.el9_7.1.noarch 105/154 2026-04-01T09:52:33.362 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : smartmontools-1:7.2-9.el9.x86_64 106/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : unzip-6.0-59.el9.x86_64 107/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : zip-3.0-35.el9.x86_64 108/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : boost-program-options-1.75.0-13.el9_7.x86_64 109/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-3.0.4-8.el9.0.1.x86_64 110/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 111/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 112/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 113/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 114/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 115/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 116/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 117/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: 
Verifying : libxslt-1.1.34-13.el9_6.x86_64 118/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 119/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-5.4.4-4.el9.x86_64 120/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 121/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 122/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : perl-Benchmark-1.23-481.1.el9_6.noarch 123/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 124/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-3.14.0-17.el9_7.x86_64 125/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 126/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-devel-3.9.23-2.el9.x86_64 127/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jinja2-2.11.3-8.el9_5.noarch 128/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jmespath-1.0.1-1.el9_7.noarch 129/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 130/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-lxml-4.6.5-3.el9.x86_64 131/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 132/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-1:1.23.5-2.el9_7.x86_64 133/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 134/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 135/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-protobuf-3.14.0-17.el9_7.noarch 136/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-0.4.8-7.el9_7.noarch 137/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 138/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 139/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 140/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-toml-0.10.2-6.el9.0.1.noarch 141/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-24.09.0-1.el9.x86_64 142/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-service-24.09.0-1.el9.x86_64 143/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 144/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 145/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlsec1-1.2.29-13.el9.x86_64 146/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlsec1-openssl-1.2.29-13.el9.x86_64 147/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 148/154 2026-04-01T09:52:33.363 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 149/154 2026-04-01T09:52:33.364 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-compiler-3.14.0-17.el9_7.x86_64 
150/154 2026-04-01T09:52:33.364 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 151/154 2026-04-01T09:52:33.364 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 152/154 2026-04-01T09:52:33.364 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 153/154 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 154/154 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout:Upgraded: 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout:Installed: 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options-1.75.0-13.el9_7.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: bzip2-1.0.8-10.el9_5.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: c-ares-1.19.1-2.el9_4.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 
INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T09:52:33.468 
INFO:teuthology.orchestra.run.vm03.stdout: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup-2.7.2-4.el9.x86_64
2026-04-01T09:52:33.468 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: fuse-2.9.9-17.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data-1.46.7-10.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-9.0.0-15.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libconfig-1.7.2-9.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran-11.5.0-11.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libnbd-1.20.3-4.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: liboath-2.6.12-1.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath-11.5.0-11.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libunwind-1.6.2-1.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: libxslt-1.1.34-13.el9_6.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: lmdb-libs-0.9.29-3.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: lua-5.4.4-4.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: luarocks-3.9.2-5.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: mailcap-2.1.49-5.el9.0.2.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli-2.13-1.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: openblas-0.3.29-1.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: pciutils-3.7.0-7.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: perl-Benchmark-1.23-481.1.el9_6.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: perl-Test-Harness-1:3.42-461.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-3.14.0-17.el9_7.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler-3.14.0-17.el9_7.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel-2.9.1-2.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-04-01T09:52:33.469 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot-10.0.1-5.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy-18.10.0-5.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography-36.0.1-5.el9_6.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel-3.9.23-2.el9.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb-5.3.1-1.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-isodate-0.6.1-3.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2-2.11.3-8.el9_5.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath-1.0.1-1.el9_7.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-lxml-4.6.5-3.el9.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-msgpack-1.0.3-2.el9.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-1:1.23.5-2.el9_7.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging-20.9-5.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply-3.11-14.el9.0.1.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend-3.1.0-2.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf-3.14.0-17.el9_7.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-0.4.8-7.el9_7.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules-0.4.8-7.el9_7.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser-2.20-6.el9.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing-2.4.7-9.el9.0.1.noarch
2026-04-01T09:52:33.470 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-2.25.1-10.el9_6.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes-2.5.1-5.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa-4.9-2.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-saml-1.16.0-1.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml-0.10.2-6.el9.0.1.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3-1.26.5-6.el9_7.1.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmlsec-1.3.13-1.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-24.09.0-1.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service-24.09.0-1.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: re2-1:20211101-20.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: s3cmd-2.4.0-1.el9.noarch
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools-1:7.2-9.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: socat-1.7.4.1-8.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: thrift-0.15.0-4.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: unzip-6.0-59.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-1.2.29-13.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-openssl-1.2.29-13.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout: zip-3.0-35.el9.x86_64
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:33.471 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T09:52:33.569 DEBUG:teuthology.parallel:result is None
2026-04-01T09:52:34.064 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 145/154
2026-04-01T09:52:34.103 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 146/154
2026-04-01T09:52:34.112 INFO:teuthology.orchestra.run.vm00.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 147/154
2026-04-01T09:52:34.121 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clys 148/154
2026-04-01T09:52:34.133 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 149/154
2026-04-01T09:52:34.141 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 150/154
2026-04-01T09:52:34.162 INFO:teuthology.orchestra.run.vm00.stdout: Installing : bzip2-1.0.8-10.el9_5.x86_64 151/154
2026-04-01T09:52:34.224 INFO:teuthology.orchestra.run.vm00.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 152/154
2026-04-01T09:52:34.224 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 153/154
2026-04-01T09:52:34.384 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 153/154
2026-04-01T09:52:34.384 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 154/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 154/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 2/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 3/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-8.g05971582 5/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 6/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 7/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 8/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 9/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 10/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 11/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 12/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.cly 13/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clys 14/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 15/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 16/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 17/154
2026-04-01T09:52:35.031 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso 18/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 19/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 20/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el 21/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 22/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso 23/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso. 24/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x8 25/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x8 26/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 27/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 28/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 29/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 30/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 31/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 32/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 33/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 34/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 35/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 36/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 37/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 38/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 39/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 40/154
2026-04-01T09:52:35.032 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 41/154
2026-04-01T09:52:35.035 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 42/154
2026-04-01T09:52:35.035 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 43/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 44/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 45/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 46/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 47/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 48/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 49/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 50/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 51/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 52/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 53/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 54/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cheroot-10.0.1-5.el9.noarch 55/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cherrypy-18.10.0-5.el9.noarch 56/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 57/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 58/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 59/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-influxdb-5.3.1-1.el9.noarch 60/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-isodate-0.6.1-3.el9.noarch 61/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 62/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 63/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 64/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 65/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 66/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 67/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 68/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 69/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-msgpack-1.0.3-2.el9.x86_64 70/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 71/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 72/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 73/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 74/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 75/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 76/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-saml-1.16.0-1.el9.noarch 77/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 78/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 79/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 80/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-xmlsec-1.3.13-1.el9.x86_64 81/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 82/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : re2-1:20211101-20.el9.x86_64 84/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 85/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 86/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : bzip2-1.0.8-10.el9_5.x86_64 87/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : c-ares-1.19.1-2.el9_4.x86_64 88/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cryptsetup-2.7.2-4.el9.x86_64 89/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 90/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 91/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 92/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libgfortran-11.5.0-11.el9.x86_64 93/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libquadmath-11.5.0-11.el9.x86_64 94/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lmdb-libs-0.9.29-3.el9.x86_64 95/154
2026-04-01T09:52:35.036 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : mailcap-2.1.49-5.el9.0.2.noarch 96/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : nvme-cli-2.13-1.el9.x86_64 97/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 98/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 99/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cryptography-36.0.1-5.el9_6.x86_64 100/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ply-3.11-14.el9.0.1.noarch 101/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 102/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.0.1.noarch 103/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-2.25.1-10.el9_6.noarch 104/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-urllib3-1.26.5-6.el9_7.1.noarch 105/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : smartmontools-1:7.2-9.el9.x86_64 106/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : unzip-6.0-59.el9.x86_64 107/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : zip-3.0-35.el9.x86_64 108/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : boost-program-options-1.75.0-13.el9_7.x86_64 109/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-3.0.4-8.el9.0.1.x86_64 110/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 111/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 112/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 113/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 114/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 115/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 116/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 117/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libxslt-1.1.34-13.el9_6.x86_64 118/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 119/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lua-5.4.4-4.el9.x86_64 120/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 121/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 122/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : perl-Benchmark-1.23-481.1.el9_6.noarch 123/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 124/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : protobuf-3.14.0-17.el9_7.x86_64 125/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 126/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-devel-3.9.23-2.el9.x86_64 127/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jinja2-2.11.3-8.el9_5.noarch 128/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jmespath-1.0.1-1.el9_7.noarch 129/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 130/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-lxml-4.6.5-3.el9.x86_64 131/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 132/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-1:1.23.5-2.el9_7.x86_64 133/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 134/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 135/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-protobuf-3.14.0-17.el9_7.noarch 136/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-0.4.8-7.el9_7.noarch 137/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 138/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 139/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 140/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-toml-0.10.2-6.el9.0.1.noarch 141/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatlib-24.09.0-1.el9.x86_64 142/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatlib-service-24.09.0-1.el9.x86_64 143/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 144/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 145/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlsec1-1.2.29-13.el9.x86_64 146/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlsec1-openssl-1.2.29-13.el9.x86_64 147/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 148/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 149/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : protobuf-compiler-3.14.0-17.el9_7.x86_64 150/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 151/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 152/154
2026-04-01T09:52:35.037 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 153/154
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 154/154
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout:Upgraded:
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout:Installed:
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options-1.75.0-13.el9_7.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: bzip2-1.0.8-10.el9_5.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: c-ares-1.19.1-2.el9_4.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.244 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: cryptsetup-2.7.2-4.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: fuse-2.9.9-17.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: grpc-data-1.46.7-10.el9.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-9.0.0-15.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libconfig-1.7.2-9.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran-11.5.0-11.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libnbd-1.20.3-4.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: liboath-2.6.12-1.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath-11.5.0-11.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libunwind-1.6.2-1.el9.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: libxslt-1.1.34-13.el9_6.x86_64
2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: lmdb-libs-0.9.29-3.el9.x86_64 2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: lua-5.4.4-4.el9.x86_64 2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: luarocks-3.9.2-5.el9.noarch 2026-04-01T09:52:35.245 INFO:teuthology.orchestra.run.vm07.stdout: mailcap-2.1.49-5.el9.0.2.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: nvme-cli-2.13-1.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: openblas-0.3.29-1.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: pciutils-3.7.0-7.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: perl-Benchmark-1.23-481.1.el9_6.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: perl-Test-Harness-1:3.42-461.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-3.14.0-17.el9_7.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-compiler-3.14.0-17.el9_7.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel-2.9.1-2.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: 
python3-bcrypt-3.2.2-1.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot-10.0.1-5.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy-18.10.0-5.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography-36.0.1-5.el9_6.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel-3.9.23-2.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-influxdb-5.3.1-1.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-isodate-0.6.1-3.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 
2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2-2.11.3-8.el9_5.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath-1.0.1-1.el9_7.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-04-01T09:52:35.246 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-lxml-4.6.5-3.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-msgpack-1.0.3-2.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-1:1.23.5-2.el9_7.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-packaging-20.9-5.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply-3.11-14.el9.0.1.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend-3.1.0-2.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-protobuf-3.14.0-17.el9_7.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: 
python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-0.4.8-7.el9_7.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules-0.4.8-7.el9_7.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser-2.20-6.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyparsing-2.4.7-9.el9.0.1.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-2.25.1-10.el9_6.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes-2.5.1-5.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa-4.9-2.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-saml-1.16.0-1.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml-0.10.2-6.el9.0.1.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3-1.26.5-6.el9_7.1.noarch 2026-04-01T09:52:35.247 
INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmlsec-1.3.13-1.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-24.09.0-1.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-service-24.09.0-1.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.247 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: re2-1:20211101-20.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: s3cmd-2.4.0-1.el9.noarch 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools-1:7.2-9.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: socat-1.7.4.1-8.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: thrift-0.15.0-4.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: unzip-6.0-59.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: xmlsec1-1.2.29-13.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: xmlsec1-openssl-1.2.29-13.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: 
zip-3.0-35.el9.x86_64 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T09:52:35.248 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T09:52:35.354 DEBUG:teuthology.parallel:result is None 2026-04-01T09:52:36.365 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 154/154 2026-04-01T09:52:36.365 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 2/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 3/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 4/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-8.g05971582 5/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 6/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 7/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 8/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 9/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x 10/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 11/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 12/154 2026-04-01T09:52:36.366 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.cly 13/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clys 14/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.cly 15/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86 16/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 17/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso 18/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 19/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 20/154 2026-04-01T09:52:36.366 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el 21/154 2026-04-01T09:52:36.367 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 22/154 2026-04-01T09:52:36.367 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso 23/154 2026-04-01T09:52:36.367 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso. 
24/154 2026-04-01T09:52:36.367 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x8 25/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x8 26/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 27/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 28/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 29/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 30/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 31/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 32/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 33/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 34/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 35/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 
36/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 37/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 38/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 39/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 40/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 41/154 2026-04-01T09:52:36.368 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 42/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 43/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 44/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 45/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 46/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 47/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 48/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 49/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 50/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 51/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 52/154 2026-04-01T09:52:36.369 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 53/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 54/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cheroot-10.0.1-5.el9.noarch 55/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cherrypy-18.10.0-5.el9.noarch 56/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 57/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 58/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 59/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-influxdb-5.3.1-1.el9.noarch 60/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-isodate-0.6.1-3.el9.noarch 61/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 62/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 63/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 64/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 65/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 66/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 67/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 68/154 
2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 69/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-msgpack-1.0.3-2.el9.x86_64 70/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 71/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 72/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 73/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 74/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 75/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 76/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-saml-1.16.0-1.el9.noarch 77/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 78/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 79/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 80/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-xmlsec-1.3.13-1.el9.x86_64 81/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 82/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : re2-1:20211101-20.el9.x86_64 84/154 2026-04-01T09:52:36.369 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 85/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 86/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : bzip2-1.0.8-10.el9_5.x86_64 87/154 2026-04-01T09:52:36.369 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : c-ares-1.19.1-2.el9_4.x86_64 88/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cryptsetup-2.7.2-4.el9.x86_64 89/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 90/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 91/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 92/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libgfortran-11.5.0-11.el9.x86_64 93/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libquadmath-11.5.0-11.el9.x86_64 94/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lmdb-libs-0.9.29-3.el9.x86_64 95/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.0.2.noarch 96/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : nvme-cli-2.13-1.el9.x86_64 97/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 98/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 99/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cryptography-36.0.1-5.el9_6.x86_64 100/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ply-3.11-14.el9.0.1.noarch 101/154 2026-04-01T09:52:36.371 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 102/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.0.1.noarch 103/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-2.25.1-10.el9_6.noarch 104/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urllib3-1.26.5-6.el9_7.1.noarch 105/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : smartmontools-1:7.2-9.el9.x86_64 106/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 107/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 108/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : boost-program-options-1.75.0-13.el9_7.x86_64 109/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-3.0.4-8.el9.0.1.x86_64 110/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 111/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 112/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 113/154 2026-04-01T09:52:36.371 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 114/154 2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 115/154 2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 116/154 2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 117/154 2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: 
Verifying : libxslt-1.1.34-13.el9_6.x86_64 118/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 119/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 120/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 121/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 122/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : perl-Benchmark-1.23-481.1.el9_6.noarch 123/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 124/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-3.14.0-17.el9_7.x86_64 125/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 126/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-devel-3.9.23-2.el9.x86_64 127/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jinja2-2.11.3-8.el9_5.noarch 128/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jmespath-1.0.1-1.el9_7.noarch 129/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 130/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-lxml-4.6.5-3.el9.x86_64 131/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 132/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-1:1.23.5-2.el9_7.x86_64 133/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 134/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 135/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-protobuf-3.14.0-17.el9_7.noarch 136/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-0.4.8-7.el9_7.noarch 137/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 138/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 139/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 140/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-toml-0.10.2-6.el9.0.1.noarch 141/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-24.09.0-1.el9.x86_64 142/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-service-24.09.0-1.el9.x86_64 143/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 144/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 145/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlsec1-1.2.29-13.el9.x86_64 146/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlsec1-openssl-1.2.29-13.el9.x86_64 147/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 148/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 149/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-compiler-3.14.0-17.el9_7.x86_64 150/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 151/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 152/154
2026-04-01T09:52:36.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 153/154
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 154/154
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout:Upgraded:
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout:Installed:
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options-1.75.0-13.el9_7.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: bzip2-1.0.8-10.el9_5.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: c-ares-1.19.1-2.el9_4.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.475 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup-2.7.2-4.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: fuse-2.9.9-17.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data-1.46.7-10.el9.noarch
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-9.0.0-15.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libconfig-1.7.2-9.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran-11.5.0-11.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libnbd-1.20.3-4.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: liboath-2.6.12-1.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath-11.5.0-11.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libunwind-1.6.2-1.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: libxslt-1.1.34-13.el9_6.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: lmdb-libs-0.9.29-3.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: mailcap-2.1.49-5.el9.0.2.noarch
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: nvme-cli-2.13-1.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: openblas-0.3.29-1.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: pciutils-3.7.0-7.el9.x86_64
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: perl-Benchmark-1.23-481.1.el9_6.noarch
2026-04-01T09:52:36.476 INFO:teuthology.orchestra.run.vm00.stdout: perl-Test-Harness-1:3.42-461.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-3.14.0-17.el9_7.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler-3.14.0-17.el9_7.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel-2.9.1-2.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot-10.0.1-5.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy-18.10.0-5.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography-36.0.1-5.el9_6.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel-3.9.23-2.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-influxdb-5.3.1-1.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-isodate-0.6.1-3.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2-2.11.3-8.el9_5.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath-1.0.1-1.el9_7.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-lxml-4.6.5-3.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-msgpack-1.0.3-2.el9.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-1:1.23.5-2.el9_7.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging-20.9-5.el9.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply-3.11-14.el9.0.1.noarch
2026-04-01T09:52:36.477 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend-3.1.0-2.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf-3.14.0-17.el9_7.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-0.4.8-7.el9_7.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules-0.4.8-7.el9_7.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser-2.20-6.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.0.1.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-2.25.1-10.el9_6.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes-2.5.1-5.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa-4.9-2.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-saml-1.16.0-1.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml-0.10.2-6.el9.0.1.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3-1.26.5-6.el9_7.1.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmlsec-1.3.13-1.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-24.09.0-1.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service-24.09.0-1.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: re2-1:20211101-20.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: s3cmd-2.4.0-1.el9.noarch
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools-1:7.2-9.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: socat-1.7.4.1-8.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: thrift-0.15.0-4.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: xmlsec1-1.2.29-13.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: xmlsec1-openssl-1.2.29-13.el9.x86_64
2026-04-01T09:52:36.478 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-04-01T09:52:36.479 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64
2026-04-01T09:52:36.479 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:36.479 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T09:52:36.577 DEBUG:teuthology.parallel:result is None
2026-04-01T09:52:36.577 INFO:teuthology.task.install:Skipping version verification because we have custom repos...
2026-04-01T09:52:36.577 INFO:teuthology.task.install:Skipping version verification because we have custom repos...
2026-04-01T09:52:36.577 INFO:teuthology.task.install:Skipping version verification because we have custom repos...
2026-04-01T09:52:36.577 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-04-01T09:52:36.578 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:36.578 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-04-01T09:52:36.612 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:36.612 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-04-01T09:52:36.649 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-04-01T09:52:36.649 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-04-01T09:52:36.684 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-04-01T09:52:36.684 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:36.684 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper
2026-04-01T09:52:36.721 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-04-01T09:52:36.797 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:36.798 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper
2026-04-01T09:52:36.832 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-04-01T09:52:36.901 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-04-01T09:52:36.901 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/daemon-helper
2026-04-01T09:52:36.933 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-04-01T09:52:37.001 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-04-01T09:52:37.001 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:37.001 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits
2026-04-01T09:52:37.033 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-04-01T09:52:37.103 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:37.103 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits
2026-04-01T09:52:37.134 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-04-01T09:52:37.204 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-04-01T09:52:37.205 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/adjust-ulimits
2026-04-01T09:52:37.230 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-04-01T09:52:37.295 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-04-01T09:52:37.296 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:37.296 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer
2026-04-01T09:52:37.327 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-04-01T09:52:37.395 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:37.395 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer
2026-04-01T09:52:37.426 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-04-01T09:52:37.497 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-04-01T09:52:37.513 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/stdin-killer
2026-04-01T09:52:37.537 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-04-01T09:52:37.606 INFO:teuthology.run_tasks:Running task ceph...
2026-04-01T09:52:37.676 INFO:tasks.ceph:Making ceph log dir writeable by non-root...
2026-04-01T09:52:37.689 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /var/log/ceph
2026-04-01T09:52:37.691 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /var/log/ceph
2026-04-01T09:52:37.692 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /var/log/ceph
2026-04-01T09:52:37.720 INFO:tasks.ceph:Disabling ceph logrotate...
2026-04-01T09:52:37.720 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/logrotate.d/ceph
2026-04-01T09:52:37.759 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/logrotate.d/ceph
2026-04-01T09:52:37.761 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/logrotate.d/ceph
2026-04-01T09:52:37.789 INFO:tasks.ceph:Creating extra log directories...
2026-04-01T09:52:37.789 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-04-01T09:52:37.830 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-04-01T09:52:37.832 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-04-01T09:52:37.867 INFO:tasks.ceph:Creating ceph cluster ceph...
2026-04-01T09:52:37.867 INFO:tasks.ceph:config {'conf': {'client': {'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'global': {'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}}, 'fs': 'xfs', 'mkfs_options': None, 'mount_options': None, 'skip_mgr_daemons': False, 'log_ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', '\\(POOL_APP_NOT_ENABLED\\)', 'not have an application enabled'], 'cpu_profile': set(), 'cluster': 'ceph', 'mon_bind_msgr2': True, 'mon_bind_addrvec': True}
2026-04-01T09:52:37.867 INFO:tasks.ceph:ctx.config {'archive_path': '/archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721', 'branch': 'wip-sse-s3-on-v20.2.0', 'description': 'rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides supported-distros/{rocky_latest} tasks/{0-install test_dedup}}', 'email': None, 'first_in_suite': False, 'flavor': 'default', 'job_id': '4721', 'last_in_suite': False, 'machine_type': 'vps', 'name': 'supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps', 'no_nested_subset': False, 'openstack': [{'volumes': {'count': 4, 'size': 10}}], 'os_type': 'rocky', 'os_version': '9.7', 'overrides': {'admin_socket': {'branch': 'wip-sse-s3-on-v20.2.0'}, 'ansible.cephlab': {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}, 'ceph': {'conf': {'client': {'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'global': {'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'fs': 'xfs', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', '\\(POOL_APP_NOT_ENABLED\\)', 'not have an application enabled'], 'sha1': '0597158282e6d69429e60df2354a6c8eed0e5bce'}, 'ceph-deploy': {'bluestore': True, 'conf': {'client': {'log file': '/var/log/ceph/ceph-$name.$pid.log'}, 'mon': {}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd objectstore': 'bluestore'}}, 'fs': 'xfs'}, 'cephadm': {'cephadm_binary_url': 'https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm', 'containers': {'image': 'harbor.clyso.com/custom-ceph/ceph/ceph:sse-s3-kmip-preview-not-for-production-1'}}, 'install': {'ceph': {'flavor': 'default', 'sha1': '0597158282e6d69429e60df2354a6c8eed0e5bce'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'repos': [{'name': 'ceph-source', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/SRPMS'}, {'name': 'ceph-noarch', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/noarch'}, {'name': 'ceph', 'priority': 1, 'url': 'https://s3.clyso.com/ces-packages/components/ceph-debug/rpm-20.2.0-8-g0597158282e/el9.clyso/x86_64'}]}, 'rgw': {'frontend': 'beast', 'storage classes': {'FROZEN': None, 'LUKEWARM': None}}, 's3tests': {'sha1': 'e0c4ff71baef6d5126a0201df5fe54196d89b296'}, 'selinux': {'allowlist': ['scontext=system_u:system_r:getty_t:s0']}, 'thrashosds': {'bdev_inject_crash': 2, 'bdev_inject_crash_probability': 0.5}, 'workunit': {'branch': 'tt-20.2.0-sse-s3-kmip-preview-not-for-production-1', 'sha1': '99e8bef8f767b591604d6078b7861a00c2936d53'}}, 'owner': 'supriti', 'priority': 1000, 'repo': 'https://github.com/ceph/ceph.git', 'roles': [['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0'], ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1'], ['client.2']], 'seed': 3517, 'sha1': '0597158282e6d69429e60df2354a6c8eed0e5bce', 'sleep_before_teardown': 0, 'suite': 'rgw', 'suite_branch': 'tt-20.2.0-sse-s3-kmip-preview-not-for-production-1', 'suite_path': '/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa', 'suite_relpath': 'qa', 'suite_repo': 'http://git.local/ceph.git', 'suite_sha1': '99e8bef8f767b591604d6078b7861a00c2936d53', 'targets': {'vm00.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMKaBYra0YQC0r5v8PJSnyELq+uBASa/JHP0hVqf/Gsj+uDFuUIdK2PWVf4v0w5+FvinmM7yTymEwx+d5MrPTjY=', 'vm03.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNz/Yckkc47y4b9KfD/sbYbvn8ajojOuiJI63PpsGF6L44sz0OnCf10skNklPBbSuXi8nEP566fiU6LHwxIwWU8=', 'vm07.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDbCrGZhGhUeI32QJ9fDw+VpIAPWsJvEnXsdQ9i3JuGSSIFA8JI84/0XrQ/jtdCEiLWrsi2zHpOSYEytCsN9Y7o='}, 'tasks': [{'internal.save_config': None}, {'internal.check_lock': None}, {'internal.add_remotes': None}, {'console_log': None}, {'internal.connect': None}, {'internal.push_inventory': None}, {'internal.serialize_remote_roles': None}, {'internal.check_conflict': None}, {'internal.check_ceph_data': None}, {'internal.vm_setup': None}, {'internal.base': None}, {'internal.archive_upload': None}, {'internal.archive': None}, {'internal.coredump': None}, {'internal.sudo': None}, {'internal.syslog': None}, {'internal.timer': None}, {'pcp': None}, {'selinux': None}, {'ansible.cephlab': None}, {'clock': None}, {'install': None}, {'ceph': None}, {'openssl_keys': None}, {'rgw': ['client.0', 'client.1', 'client.2']}, {'tox': ['client.0']}, {'tox': ['client.0']}, {'dedup-tests': {'client.0': {'rgw_server': 'client.0'}}}], 'teuthology': {'fragments_dropped': [], 'meta': {}, 'postmerge': []}, 'teuthology_branch': 'uv2', 'teuthology_repo': 'https://github.com/kshtsk/teuthology', 'teuthology_sha1': 'a59626679648f962bca99d20d35578f2998c8f37', 'timestamp': '2026-04-01_09:45:36', 'tube': 'vps', 'user': 'supriti', 'verbose': False, 'worker_log': '/home/teuthos/.teuthology/dispatcher/dispatcher.vps.282426'}
2026-04-01T09:52:37.867 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-04-01T09:52:37.899 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-04-01T09:52:37.905 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-04-01T09:52:37.924 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m0777 -- /var/run/ceph
2026-04-01T09:52:37.957 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m0777 -- /var/run/ceph
2026-04-01T09:52:37.966 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m0777 -- /var/run/ceph
2026-04-01T09:52:37.997 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:38.002 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout
2026-04-01T09:52:38.047 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4']
2026-04-01T09:52:38.047 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_1
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 1081 Links: 1
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-04-01 09:52:34.922721655 +0000
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-04-01 09:52:12.113706612 +0000
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-04-01 09:52:12.113706612 +0000
2026-04-01T09:52:38.111 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-04-01 09:52:12.113706612 +0000
2026-04-01T09:52:38.111 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1
2026-04-01T09:52:38.184 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-04-01T09:52:38.184 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-04-01T09:52:38.184 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.00015541 s, 3.3 MB/s
2026-04-01T09:52:38.185 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1
2026-04-01T09:52:38.246 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_2
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 1067 Links: 1
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-04-01 09:52:34.922721655 +0000
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-04-01 09:52:12.105706607 +0000
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-04-01 09:52:12.105706607 +0000
2026-04-01T09:52:38.305 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-04-01 09:52:12.105706607 +0000
2026-04-01T09:52:38.306 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1
2026-04-01T09:52:38.374 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-04-01T09:52:38.374 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-04-01T09:52:38.374 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000180789 s, 2.8 MB/s
2026-04-01T09:52:38.375 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2
2026-04-01T09:52:38.435 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_3
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 1075 Links: 1
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-04-01 09:52:34.922721655 +0000
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-04-01 09:52:12.111706610 +0000
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-04-01 09:52:12.111706610 +0000
2026-04-01T09:52:38.494 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-04-01 09:52:12.111706610 +0000
2026-04-01T09:52:38.494 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1
2026-04-01T09:52:38.568 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-04-01T09:52:38.569 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-04-01T09:52:38.569 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000168656 s, 3.0 MB/s
2026-04-01T09:52:38.570 DEBUG:teuthology.orchestra.run.vm00:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-04-01T09:52:38.631 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_4 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 1089 Links: 1 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-04-01 09:52:34.923721656 +0000 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-04-01 09:52:12.125706620 +0000 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-04-01 09:52:12.125706620 +0000 2026-04-01T09:52:38.690 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-04-01 09:52:12.125706620 +0000 2026-04-01T09:52:38.691 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-04-01T09:52:38.759 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-04-01T09:52:38.759 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-04-01T09:52:38.759 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000167193 s, 3.1 MB/s 2026-04-01T09:52:38.760 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4 2026-04-01T09:52:38.823 INFO:tasks.ceph:osd dev map: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'} 2026-04-01T09:52:38.823 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-04-01T09:52:38.823 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout 2026-04-01T09:52:38.845 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4'] 2026-04-01T09:52:38.845 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vg_nvme/lv_1 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 1049 Links: 1 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:device_t:s0 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-04-01 09:52:31.965942754 +0000 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-04-01 09:52:09.056934933 +0000 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-04-01 09:52:09.056934933 +0000 2026-04-01T09:52:38.908 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-04-01 09:52:09.056934933 +0000 2026-04-01T09:52:38.909 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1 2026-04-01T09:52:38.975 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-04-01T09:52:38.975 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-04-01T09:52:38.975 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000178234 s, 2.9 MB/s 
2026-04-01T09:52:38.976 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1
2026-04-01T09:52:39.036 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vg_nvme/lv_2
2026-04-01T09:52:39.096 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1
2026-04-01T09:52:39.096 INFO:teuthology.orchestra.run.vm03.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-04-01T09:52:39.096 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 1059 Links: 1
2026-04-01T09:52:39.096 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-04-01T09:52:39.097 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:device_t:s0
2026-04-01T09:52:39.097 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-04-01 09:52:31.965942754 +0000
2026-04-01T09:52:39.097 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-04-01 09:52:09.058934933 +0000
2026-04-01T09:52:39.097 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-04-01 09:52:09.058934933 +0000
2026-04-01T09:52:39.097 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-04-01 09:52:09.058934933 +0000
2026-04-01T09:52:39.097 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1
2026-04-01T09:52:39.164 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-04-01T09:52:39.164 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-04-01T09:52:39.164 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000176971 s, 2.9 MB/s
2026-04-01T09:52:39.165 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2
2026-04-01T09:52:39.225 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vg_nvme/lv_3
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 1069 Links: 1
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:device_t:s0
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-04-01 09:52:31.965942754 +0000
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-04-01 09:52:09.062934935 +0000
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-04-01 09:52:09.062934935 +0000
2026-04-01T09:52:39.281 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-04-01 09:52:09.062934935 +0000
2026-04-01T09:52:39.281 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1
2026-04-01T09:52:39.349 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-04-01T09:52:39.349 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-04-01T09:52:39.349 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000152816 s, 3.4 MB/s
2026-04-01T09:52:39.350 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3
2026-04-01T09:52:39.413 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vg_nvme/lv_4
2026-04-01T09:52:39.475 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3
2026-04-01T09:52:39.475 INFO:teuthology.orchestra.run.vm03.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-04-01T09:52:39.475 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 1061 Links: 1
2026-04-01T09:52:39.475 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-04-01T09:52:39.476 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:device_t:s0
2026-04-01T09:52:39.476 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-04-01 09:52:31.966942754 +0000
2026-04-01T09:52:39.476 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-04-01 09:52:09.059934934 +0000
2026-04-01T09:52:39.476 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-04-01 09:52:09.059934934 +0000
2026-04-01T09:52:39.476 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-04-01 09:52:09.059934934 +0000
2026-04-01T09:52:39.476 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1
2026-04-01T09:52:39.547 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-04-01T09:52:39.547 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-04-01T09:52:39.547 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000172073 s, 3.0 MB/s
2026-04-01T09:52:39.548 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4
2026-04-01T09:52:39.608 INFO:tasks.ceph:osd dev map: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:39.608 INFO:tasks.ceph:remote_to_roles_to_devs: {Remote(name='ubuntu@vm00.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}, Remote(name='ubuntu@vm03.local'): {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}}
2026-04-01T09:52:39.608 INFO:tasks.ceph:Generating config...
2026-04-01T09:52:39.608 INFO:tasks.ceph:[client] debug rgw = 20
2026-04-01T09:52:39.609 INFO:tasks.ceph:[client] debug rgw dedup = 20
2026-04-01T09:52:39.609 INFO:tasks.ceph:[client] setgroup = ceph
2026-04-01T09:52:39.609 INFO:tasks.ceph:[client] setuser = ceph
2026-04-01T09:52:39.609 INFO:tasks.ceph:[global] osd_max_pg_log_entries = 10
2026-04-01T09:52:39.609 INFO:tasks.ceph:[global] osd_min_pg_log_entries = 10
2026-04-01T09:52:39.609 INFO:tasks.ceph:[mgr] debug mgr = 20
2026-04-01T09:52:39.609 INFO:tasks.ceph:[mgr] debug ms = 1
2026-04-01T09:52:39.609 INFO:tasks.ceph:[mon] debug mon = 20
2026-04-01T09:52:39.609 INFO:tasks.ceph:[mon] debug ms = 1
2026-04-01T09:52:39.609 INFO:tasks.ceph:[mon] debug paxos = 20
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] bdev async discard = True
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] bdev enable discard = True
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] bluestore allocator = bitmap
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] bluestore block size = 96636764160
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] bluestore fsck on mount = True
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] debug bluefs = 1/20
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] debug bluestore = 1/20
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] debug ms = 1
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] debug osd = 20
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] debug rocksdb = 4/10
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] mon osd backfillfull_ratio = 0.85
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] mon osd full ratio = 0.9
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] mon osd nearfull ratio = 0.8
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] osd failsafe full ratio = 0.95
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] osd mclock iops capacity threshold hdd = 49000
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] osd objectstore = bluestore
2026-04-01T09:52:39.609 INFO:tasks.ceph:[osd] osd shutdown pgref assert = True
2026-04-01T09:52:39.609 INFO:tasks.ceph:Setting up mon.a...
2026-04-01T09:52:39.609 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring /etc/ceph/ceph.keyring
2026-04-01T09:52:39.650 INFO:teuthology.orchestra.run.vm00.stdout:creating /etc/ceph/ceph.keyring
2026-04-01T09:52:39.653 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=mon. /etc/ceph/ceph.keyring
2026-04-01T09:52:39.739 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.keyring
2026-04-01T09:52:39.767 DEBUG:tasks.ceph:Ceph mon addresses: [('mon.a', '192.168.123.100'), ('mon.c', '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]'), ('mon.b', '192.168.123.103')]
2026-04-01T09:52:39.767 DEBUG:tasks.ceph:writing out conf {'global': {'chdir': '', 'pid file': '/var/run/ceph/$cluster-$name.pid', 'auth supported': 'cephx', 'filestore xattr use omap': 'true', 'mon clock drift allowed': '1.000', 'osd crush chooseleaf type': '0', 'auth debug': 'true', 'ms die on old message': 'true', 'ms die on bug': 'true', 'mon max pg per osd': '10000', 'mon pg warn max object skew': '0', 'osd_pool_default_pg_autoscale_mode': 'off', 'osd pool default size': '2', 'mon osd allow primary affinity': 'true', 'mon osd allow pg remap': 'true', 'mon warn on legacy crush tunables': 'false', 'mon warn on crush straw calc version zero': 'false', 'mon warn on no sortbitwise': 'false', 'mon warn on osd down out interval zero': 'false', 'mon warn on too few osds': 'false', 'mon_warn_on_pool_pg_num_not_power_of_two': 'false', 'mon_warn_on_pool_no_redundancy': 'false', 'mon_allow_pool_size_one': 'true', 'osd pool default erasure code profile': 'plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd', 'osd default data pool replay window': '5', 'mon allow pool delete': 'true', 'mon cluster log file level': 'debug', 'debug asserts on shutdown': 'true', 'mon health detail to clog': 'false', 'mon host': '192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.103', 'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'osd': {'osd journal size': '100', 'osd scrub load threshold': '5.0', 'osd scrub max interval': '600', 'osd mclock profile': 'high_recovery_ops', 'osd recover clone overlap': 'true', 'osd recovery max chunk': '1048576', 'osd debug shutdown': 'true', 'osd debug op order': 'true', 'osd debug verify stray on activate': 'true', 'osd debug trim objects': 'true', 'osd open classes on start': 'true', 'osd debug pg log writeout': 'true', 'osd deep scrub update digest min age': '30', 'osd map max advance': '10', 'journal zero on create': 'true', 'filestore ondisk finisher threads': '3', 'filestore apply finisher threads': '3', 'bdev debug aio': 'true', 'osd debug misdirected ops': 'true', 'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}, 'mgr': {'debug ms': 1, 'debug mgr': 20, 'debug mon': '20', 'debug auth': '20', 'mon reweight min pgs per osd': '4', 'mon reweight min bytes per osd': '10', 'mgr/telemetry/nag': 'false'}, 'mon': {'debug ms': 1, 'debug mon': 20, 'debug paxos': 20, 'debug auth': '20', 'mon data avail warn': '5', 'mon mgr mkfs grace': '240', 'mon reweight min pgs per osd': '4', 'mon osd reporter subtree level': 'osd', 'mon osd prime pg temp': 'true', 'mon reweight min bytes per osd': '10', 'auth mon ticket ttl': '660', 'auth service ticket ttl': '240', 'mon_warn_on_insecure_global_id_reclaim': 'false', 'mon_warn_on_insecure_global_id_reclaim_allowed': 'false', 'mon_down_mkfs_grace': '2m', 'mon_warn_on_filestore_osds': 'false'}, 'client': {'rgw cache enabled': 'true', 'rgw enable ops log': 'true', 'rgw enable usage log': 'true', 'log file': '/var/log/ceph/$cluster-$name.$pid.log', 'admin socket': '/var/run/ceph/$cluster-$name.$pid.asok', 'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'mon.a': {}, 'mon.c': {}, 'mon.b': {}}
2026-04-01T09:52:39.767 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:39.767 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph.tmp.conf
2026-04-01T09:52:39.823 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage monmaptool -c /home/ubuntu/cephtest/ceph.tmp.conf --create --clobber --enable-all-features --add a 192.168.123.100 --addv c '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]' --add b 192.168.123.103 --print /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: monmap file /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: generated fsid 9d6defe0-49c7-414d-80c7-4853a2cdd635
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = tentacle
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:epoch 0
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:fsid 9d6defe0-49c7-414d-80c7-4853a2cdd635
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:last_changed 2026-04-01T09:52:39.903318+0000
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:created 2026-04-01T09:52:39.903318+0000
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:min_mon_release 20 (tentacle)
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:election_strategy: 1
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c
2026-04-01T09:52:39.903 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: writing epoch 0 to /home/ubuntu/cephtest/ceph.monmap (3 monitors)
2026-04-01T09:52:39.905 DEBUG:teuthology.orchestra.run.vm00:> rm -- /home/ubuntu/cephtest/ceph.tmp.conf
2026-04-01T09:52:39.961 INFO:tasks.ceph:Writing /etc/ceph/ceph.conf for FSID 9d6defe0-49c7-414d-80c7-4853a2cdd635...
2026-04-01T09:52:39.962 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null
2026-04-01T09:52:40.004 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null
2026-04-01T09:52:40.005 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:[global]
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: chdir = ""
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: auth supported = cephx
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: filestore xattr use omap = true
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: mon clock drift allowed = 1.000
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: osd crush chooseleaf type = 0
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: auth debug = true
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: ms die on old message = true
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: ms die on bug = true
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: mon max pg per osd = 10000 # >= luminous
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: mon pg warn max object skew = 0
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: # disable pg_autoscaler by default for new pools
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: osd_pool_default_pg_autoscale_mode = off
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: osd pool default size = 2
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.051 INFO:teuthology.orchestra.run.vm07.stdout: mon osd allow primary affinity = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon osd allow pg remap = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon warn on legacy crush tunables = false
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon warn on crush straw calc version zero = false
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon warn on no sortbitwise = false
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon warn on osd down out interval zero = false
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon warn on too few osds = false
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon_warn_on_pool_no_redundancy = false
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon_allow_pool_size_one = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd default data pool replay window = 5
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon allow pool delete = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon cluster log file level = debug
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: debug asserts on shutdown = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon health detail to clog = false
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: mon host = "192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.103"
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd_max_pg_log_entries = 10
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd_min_pg_log_entries = 10
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: fsid = 9d6defe0-49c7-414d-80c7-4853a2cdd635
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:[osd]
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd journal size = 100
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd scrub load threshold = 5.0
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd scrub max interval = 600
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd mclock profile = high_recovery_ops
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd recover clone overlap = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd recovery max chunk = 1048576
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd debug shutdown = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd debug op order = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd debug verify stray on activate = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd debug trim objects = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd open classes on start = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd debug pg log writeout = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd deep scrub update digest min age = 30
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd map max advance = 10
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: journal zero on create = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: filestore ondisk finisher threads = 3
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: filestore apply finisher threads = 3
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: bdev debug aio = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: osd debug misdirected ops = true
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: bdev async discard = True
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: bdev enable discard = True
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: bluestore allocator = bitmap
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: bluestore block size = 96636764160
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: bluestore fsck on mount = True
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: debug bluefs = 1/20
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: debug bluestore = 1/20
2026-04-01T09:52:40.052 INFO:teuthology.orchestra.run.vm07.stdout: debug ms = 1
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug osd = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug rocksdb = 4/10
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon osd backfillfull_ratio = 0.85
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon osd full ratio = 0.9
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon osd nearfull ratio = 0.8
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: osd failsafe full ratio = 0.95
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: osd mclock iops capacity threshold hdd = 49000
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: osd objectstore = bluestore
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: osd shutdown pgref assert = True
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:[mgr]
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug ms = 1
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug mgr = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug mon = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug auth = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon reweight min pgs per osd = 4
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon reweight min bytes per osd = 10
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mgr/telemetry/nag = false
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:[mon]
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug ms = 1
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug mon = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug paxos = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug auth = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon data avail warn = 5
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon mgr mkfs grace = 240
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon reweight min pgs per osd = 4
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon osd reporter subtree level = osd
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon osd prime pg temp = true
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon reweight min bytes per osd = 10
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: auth mon ticket ttl = 660 # 11m
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: auth service ticket ttl = 240 # 4m
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: # don't complain about insecure global_id in the test suite
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: # 1m isn't quite enough
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon_down_mkfs_grace = 2m
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: mon_warn_on_filestore_osds = false
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:[client]
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: rgw cache enabled = true
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: rgw enable ops log = true
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: rgw enable usage log = true
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug rgw = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: debug rgw dedup = 20
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: setgroup = ceph
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout: setuser = ceph
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:[mon.a]
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:[mon.c]
2026-04-01T09:52:40.053 INFO:teuthology.orchestra.run.vm07.stdout:[mon.b]
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:[global]
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: chdir = ""
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: auth supported = cephx
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: filestore xattr use omap = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon clock drift allowed = 1.000
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd crush chooseleaf type = 0
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: auth debug = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: ms die on old message = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: ms die on bug = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon max pg per osd = 10000 # >= luminous
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon pg warn max object skew = 0
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: # disable pg_autoscaler by default for new pools
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd_pool_default_pg_autoscale_mode = off
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd pool default size = 2
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon osd allow primary affinity = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon osd allow pg remap = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on legacy crush tunables = false
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on crush straw calc version zero = false
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on no sortbitwise = false
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on osd down out interval zero = false
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on too few osds = false
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_pool_no_redundancy = false
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon_allow_pool_size_one = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd default data pool replay window = 5
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon allow pool delete = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon cluster log file level = debug
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: debug asserts on shutdown = true
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon health detail to clog = false
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: mon host = "192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.103"
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd_max_pg_log_entries = 10
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd_min_pg_log_entries = 10
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: fsid = 9d6defe0-49c7-414d-80c7-4853a2cdd635
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:[osd]
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd journal size = 100
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.054 INFO:teuthology.orchestra.run.vm00.stdout: osd scrub load threshold = 5.0
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd scrub max interval = 600
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock profile = high_recovery_ops
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd recover clone overlap = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd recovery max chunk = 1048576
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd debug shutdown = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd debug op order = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd debug verify stray on activate = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd debug trim objects = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd open classes on start = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd debug pg log writeout = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd deep scrub update digest min age = 30
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd map max advance = 10
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: journal zero on create = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: filestore ondisk finisher threads = 3
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: filestore apply finisher threads = 3
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: bdev debug aio = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd debug misdirected ops = true
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: bdev async discard = True
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: bdev enable discard = True
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: bluestore allocator = bitmap
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: bluestore block size = 96636764160
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: bluestore fsck on mount = True
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug bluefs = 1/20
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug bluestore = 1/20
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug osd = 20
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug rocksdb = 4/10
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: mon osd backfillfull_ratio = 0.85
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: mon osd full ratio = 0.9
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: mon osd nearfull ratio = 0.8
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd failsafe full ratio = 0.95
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock iops capacity threshold hdd = 49000
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd objectstore = bluestore
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: osd shutdown pgref assert = True
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:[mgr]
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug mgr = 20
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug mon = 20
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug auth = 20
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min pgs per osd = 4
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min bytes per osd = 10
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: mgr/telemetry/nag = false
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout:[mon]
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1
2026-04-01T09:52:40.055 INFO:teuthology.orchestra.run.vm00.stdout: debug mon = 20
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: debug paxos = 20
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: debug auth = 20
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon data avail warn = 5
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon mgr mkfs grace = 240
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min pgs per osd = 4
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon osd reporter subtree level = osd
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon osd prime pg temp = true
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min bytes per osd = 10
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: auth mon ticket ttl = 660 # 11m
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: auth service ticket ttl = 240 # 4m
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: # don't complain about insecure global_id in the test suite
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: # 1m isn't quite enough
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon_down_mkfs_grace = 2m
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_filestore_osds = false
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:[client]
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: rgw cache enabled = true
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable ops log = true
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable usage log = true
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: debug rgw = 20
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: debug rgw dedup = 20
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: setgroup = ceph
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout: setuser = ceph
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:[mon.a]
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:[mon.c]
2026-04-01T09:52:40.056 INFO:teuthology.orchestra.run.vm00.stdout:[mon.b]
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout:[global]
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: chdir = ""
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: auth supported = cephx
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: filestore xattr use omap = true
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: mon clock drift allowed = 1.000
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: osd crush chooseleaf type = 0
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: auth debug = true
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: ms die on old message = true
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: ms die on bug = true
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: mon max pg per osd = 10000 # >= luminous
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: mon pg warn max object skew = 0
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: # disable pg_autoscaler by default for new pools
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout: osd_pool_default_pg_autoscale_mode = off
2026-04-01T09:52:40.057 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd pool default size = 2
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon osd allow primary affinity = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon osd allow pg remap = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on legacy crush tunables = false
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on crush straw calc version zero = false
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on no sortbitwise = false
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on osd down out interval zero = false
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on too few osds = false
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_pool_no_redundancy = false
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon_allow_pool_size_one = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd default data pool replay window = 5
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon allow pool delete = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon cluster log file level = debug
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: debug asserts on shutdown = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon health detail to clog = false
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: mon host = "192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.103"
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd_max_pg_log_entries = 10
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd_min_pg_log_entries = 10
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: fsid = 9d6defe0-49c7-414d-80c7-4853a2cdd635
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:[osd]
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd journal size = 100
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd scrub load threshold = 5.0
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd scrub max interval = 600
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd mclock profile = high_recovery_ops
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd recover clone overlap = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd recovery max chunk = 1048576
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd debug shutdown = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd debug op order = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd debug verify stray on activate = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd debug trim objects = true
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.058 INFO:teuthology.orchestra.run.vm03.stdout: osd open classes on start = true
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: osd debug pg log writeout = true
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: osd deep scrub update digest min age = 30
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: osd map max advance = 10
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: journal zero on create = true
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: filestore ondisk finisher threads = 3
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: filestore apply finisher threads = 3
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: bdev debug aio = true
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: osd debug misdirected ops = true
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: bdev async discard = True
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: bdev enable discard = True
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: bluestore allocator = bitmap
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: bluestore block size = 96636764160
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: bluestore fsck on mount = True
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug bluefs = 1/20
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug bluestore = 1/20
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug ms = 1
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug osd = 20
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug rocksdb = 4/10
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: mon osd backfillfull_ratio = 0.85
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: mon osd full ratio = 0.9
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: mon osd nearfull ratio = 0.8
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: osd failsafe full ratio = 0.95
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: osd mclock iops capacity threshold hdd = 49000
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: osd objectstore = bluestore
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: osd shutdown pgref assert = True
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:[mgr]
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug ms = 1
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug mgr = 20
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug mon = 20
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug auth = 20
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: mon reweight min pgs per osd = 4
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: mon reweight min bytes per osd = 10
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: mgr/telemetry/nag = false
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout:[mon]
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug ms = 1
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug mon = 20
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug paxos = 20
2026-04-01T09:52:40.059 INFO:teuthology.orchestra.run.vm03.stdout: debug auth = 20
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon data avail warn = 5
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon mgr mkfs grace = 240
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon reweight min pgs per osd = 4
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon osd reporter subtree level = osd
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon osd prime pg temp = true
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon reweight min bytes per osd = 10
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: auth mon ticket ttl = 660 # 11m
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: auth service ticket ttl = 240 # 4m
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: # don't complain about insecure global_id in the test suite
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: # 1m isn't quite enough
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon_down_mkfs_grace = 2m
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_filestore_osds = false
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:[client]
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: rgw cache enabled = true
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: rgw enable ops log = true
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: rgw enable usage log = true
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: debug rgw = 20
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: debug rgw dedup = 20
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: setgroup = ceph
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout: setuser = ceph
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:[mon.a]
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:[mon.c]
2026-04-01T09:52:40.060 INFO:teuthology.orchestra.run.vm03.stdout:[mon.b]
2026-04-01T09:52:40.068 INFO:tasks.ceph:Creating admin key on mon.a...
2026-04-01T09:52:40.068 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /etc/ceph/ceph.keyring
2026-04-01T09:52:40.122 INFO:tasks.ceph:Copying monmap to all nodes...
2026-04-01T09:52:40.122 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:40.122 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.keyring of=/dev/stdout
2026-04-01T09:52:40.141 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:40.141 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.monmap of=/dev/stdout
2026-04-01T09:52:40.200 INFO:tasks.ceph:Sending monmap to node ubuntu@vm00.local
2026-04-01T09:52:40.200 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:40.200 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.keyring
2026-04-01T09:52:40.200 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.keyring
2026-04-01T09:52:40.275 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:40.275 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:40.334 INFO:tasks.ceph:Sending monmap to node ubuntu@vm03.local
2026-04-01T09:52:40.334 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:40.334 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.keyring
2026-04-01T09:52:40.334 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.keyring
2026-04-01T09:52:40.373 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:40.373 DEBUG:teuthology.orchestra.run.vm03:> dd of=/home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:40.432 INFO:tasks.ceph:Sending monmap to node ubuntu@vm07.local
2026-04-01T09:52:40.432 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-04-01T09:52:40.432 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.keyring
2026-04-01T09:52:40.432 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 0644 /etc/ceph/ceph.keyring
2026-04-01T09:52:40.465 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-04-01T09:52:40.466 DEBUG:teuthology.orchestra.run.vm07:> dd of=/home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:40.524 INFO:tasks.ceph:Setting up mon nodes...
2026-04-01T09:52:40.524 INFO:tasks.ceph:Setting up mgr nodes...
2026-04-01T09:52:40.524 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mgr/ceph-y && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.y /var/lib/ceph/mgr/ceph-y/keyring
2026-04-01T09:52:40.575 INFO:teuthology.orchestra.run.vm00.stdout:creating /var/lib/ceph/mgr/ceph-y/keyring
2026-04-01T09:52:40.578 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /var/lib/ceph/mgr/ceph-x && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.x /var/lib/ceph/mgr/ceph-x/keyring
2026-04-01T09:52:40.626 INFO:teuthology.orchestra.run.vm03.stdout:creating /var/lib/ceph/mgr/ceph-x/keyring
2026-04-01T09:52:40.629 INFO:tasks.ceph:Setting up mds nodes...
2026-04-01T09:52:40.629 INFO:tasks.ceph_client:Setting up client nodes...
2026-04-01T09:52:40.630 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.0 /etc/ceph/ceph.client.0.keyring && sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-04-01T09:52:40.672 INFO:teuthology.orchestra.run.vm00.stdout:creating /etc/ceph/ceph.client.0.keyring
2026-04-01T09:52:40.685 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.1 /etc/ceph/ceph.client.1.keyring && sudo chmod 0644 /etc/ceph/ceph.client.1.keyring
2026-04-01T09:52:40.727 INFO:teuthology.orchestra.run.vm03.stdout:creating /etc/ceph/ceph.client.1.keyring
2026-04-01T09:52:40.741 DEBUG:teuthology.orchestra.run.vm07:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.2 /etc/ceph/ceph.client.2.keyring && sudo chmod 0644 /etc/ceph/ceph.client.2.keyring
2026-04-01T09:52:40.782 INFO:teuthology.orchestra.run.vm07.stdout:creating /etc/ceph/ceph.client.2.keyring
2026-04-01T09:52:40.794 INFO:tasks.ceph:Running mkfs on osd nodes...
2026-04-01T09:52:40.794 INFO:tasks.ceph:ctx.disk_config.remote_to_roles_to_dev: {Remote(name='ubuntu@vm00.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}, Remote(name='ubuntu@vm03.local'): {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}}
2026-04-01T09:52:40.794 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-0
2026-04-01T09:52:40.822 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:40.823 INFO:tasks.ceph:role: osd.0
2026-04-01T09:52:40.823 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm00.local
2026-04-01T09:52:40.823 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-04-01T09:52:40.889 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-04-01T09:52:40.893 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done.
2026-04-01T09:52:40.900 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm00.local -o noatime
2026-04-01T09:52:40.900 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-0
2026-04-01T09:52:40.980 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-0
2026-04-01T09:52:41.055 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-1
2026-04-01T09:52:41.122 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:41.122 INFO:tasks.ceph:role: osd.1
2026-04-01T09:52:41.122 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm00.local
2026-04-01T09:52:41.122 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-04-01T09:52:41.193 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-04-01T09:52:41.199 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done.
2026-04-01T09:52:41.202 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm00.local -o noatime
2026-04-01T09:52:41.202 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-1
2026-04-01T09:52:41.273 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-1
2026-04-01T09:52:41.342 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-2
2026-04-01T09:52:41.411 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:41.411 INFO:tasks.ceph:role: osd.2
2026-04-01T09:52:41.411 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_3 on ubuntu@vm00.local
2026-04-01T09:52:41.411 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_3
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_3 isize=2048 agcount=4, agsize=1310464 blks
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-04-01T09:52:41.481 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-04-01T09:52:41.486 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done.
2026-04-01T09:52:41.488 INFO:tasks.ceph:mount /dev/vg_nvme/lv_3 on ubuntu@vm00.local -o noatime
2026-04-01T09:52:41.488 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_3 /var/lib/ceph/osd/ceph-2
2026-04-01T09:52:41.562 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-2
2026-04-01T09:52:41.636 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-3
2026-04-01T09:52:41.707 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:41.707 INFO:tasks.ceph:role: osd.3
2026-04-01T09:52:41.707 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_4 on ubuntu@vm00.local
2026-04-01T09:52:41.707 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_4
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_4 isize=2048 agcount=4, agsize=1310464 blks
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-04-01T09:52:41.778 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-04-01T09:52:41.782 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done.
2026-04-01T09:52:41.784 INFO:tasks.ceph:mount /dev/vg_nvme/lv_4 on ubuntu@vm00.local -o noatime
2026-04-01T09:52:41.784 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-3
2026-04-01T09:52:41.859 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-3
2026-04-01T09:52:41.926 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:42.007 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:42.006+0000 7fc30b652900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2026-04-01T09:52:42.008 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:42.006+0000 7fc30b652900 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
2026-04-01T09:52:42.008 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:42.006+0000 7fc30b652900 -1 bdev(0x557be8199800 /var/lib/ceph/osd/ceph-0/block) open stat got: (1) Operation not permitted
2026-04-01T09:52:42.008 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:42.006+0000 7fc30b652900 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid
2026-04-01T09:52:42.745 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-04-01T09:52:42.816 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:42.900 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:42.899+0000 7f990abbe900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory
2026-04-01T09:52:42.901 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:42.899+0000 7f990abbe900 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring
2026-04-01T09:52:42.901 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:42.899+0000 7f990abbe900 -1 bdev(0x560783cdf800 /var/lib/ceph/osd/ceph-1/block) open stat got: (1) Operation not permitted
2026-04-01T09:52:42.901 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:42.899+0000 7f990abbe900 -1 bluestore(/var/lib/ceph/osd/ceph-1) _read_fsid unparsable uuid
2026-04-01T09:52:43.633 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-04-01T09:52:43.664 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 2 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:43.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:43.757+0000 7fbeb03f1900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory
2026-04-01T09:52:43.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:43.757+0000 7fbeb03f1900 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring
2026-04-01T09:52:43.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:43.757+0000 7fbeb03f1900 -1 bdev(0x555853ec1800 /var/lib/ceph/osd/ceph-2/block) open stat got: (1) Operation not permitted
2026-04-01T09:52:43.759 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:43.757+0000 7fbeb03f1900 -1 bluestore(/var/lib/ceph/osd/ceph-2) _read_fsid unparsable uuid
2026-04-01T09:52:44.495 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-04-01T09:52:44.525 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:44.613 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:44.612+0000 7f9bcccd6900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-3/keyring: can't open /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
2026-04-01T09:52:44.614 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:44.612+0000 7f9bcccd6900 -1 created new key in keyring /var/lib/ceph/osd/ceph-3/keyring
2026-04-01T09:52:44.614 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:44.612+0000 7f9bcccd6900 -1 bdev(0x5591a0d51800 /var/lib/ceph/osd/ceph-3/block) open stat got: (1) Operation not permitted
2026-04-01T09:52:44.614 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:52:44.612+0000 7f9bcccd6900 -1 bluestore(/var/lib/ceph/osd/ceph-3) _read_fsid unparsable uuid
2026-04-01T09:52:45.352 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-04-01T09:52:45.376 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /var/lib/ceph/osd/ceph-4
2026-04-01T09:52:45.402 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:45.402 INFO:tasks.ceph:role: osd.4
2026-04-01T09:52:45.402 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm03.local
2026-04-01T09:52:45.402 DEBUG:teuthology.orchestra.run.vm03:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 attr=2, projid32bit=1
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout: = sunit=0 swidth=0 blks
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-04-01T09:52:45.474 INFO:teuthology.orchestra.run.vm03.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-04-01T09:52:45.478 INFO:teuthology.orchestra.run.vm03.stdout:Discarding blocks...Done.
2026-04-01T09:52:45.480 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm03.local -o noatime
2026-04-01T09:52:45.481 DEBUG:teuthology.orchestra.run.vm03:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-4
2026-04-01T09:52:45.554 DEBUG:teuthology.orchestra.run.vm03:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-4
2026-04-01T09:52:45.626 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /var/lib/ceph/osd/ceph-5
2026-04-01T09:52:45.690 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:45.690 INFO:tasks.ceph:role: osd.5
2026-04-01T09:52:45.690 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm03.local
2026-04-01T09:52:45.690 DEBUG:teuthology.orchestra.run.vm03:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 attr=2, projid32bit=1
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout: = sunit=0 swidth=0 blks
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-04-01T09:52:45.753 INFO:teuthology.orchestra.run.vm03.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-04-01T09:52:45.758 INFO:teuthology.orchestra.run.vm03.stdout:Discarding blocks...Done.
2026-04-01T09:52:45.760 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm03.local -o noatime
2026-04-01T09:52:45.760 DEBUG:teuthology.orchestra.run.vm03:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-5
2026-04-01T09:52:45.833 DEBUG:teuthology.orchestra.run.vm03:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-5
2026-04-01T09:52:45.903 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /var/lib/ceph/osd/ceph-6
2026-04-01T09:52:45.972 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:45.972 INFO:tasks.ceph:role: osd.6
2026-04-01T09:52:45.972 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_3 on ubuntu@vm03.local
2026-04-01T09:52:45.972 DEBUG:teuthology.orchestra.run.vm03:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_3
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout:meta-data=/dev/vg_nvme/lv_3 isize=2048 agcount=4, agsize=1310464 blks
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 attr=2, projid32bit=1
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout: = sunit=0 swidth=0 blks
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-04-01T09:52:46.042 INFO:teuthology.orchestra.run.vm03.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-04-01T09:52:46.047 INFO:teuthology.orchestra.run.vm03.stdout:Discarding blocks...Done.
2026-04-01T09:52:46.050 INFO:tasks.ceph:mount /dev/vg_nvme/lv_3 on ubuntu@vm03.local -o noatime
2026-04-01T09:52:46.050 DEBUG:teuthology.orchestra.run.vm03:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_3 /var/lib/ceph/osd/ceph-6
2026-04-01T09:52:46.128 DEBUG:teuthology.orchestra.run.vm03:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-6
2026-04-01T09:52:46.197 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /var/lib/ceph/osd/ceph-7
2026-04-01T09:52:46.263 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-04-01T09:52:46.263 INFO:tasks.ceph:role: osd.7
2026-04-01T09:52:46.263 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_4 on ubuntu@vm03.local
2026-04-01T09:52:46.263 DEBUG:teuthology.orchestra.run.vm03:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_4
2026-04-01T09:52:46.330 INFO:teuthology.orchestra.run.vm03.stdout:meta-data=/dev/vg_nvme/lv_4 isize=2048 agcount=4, agsize=1310464 blks
2026-04-01T09:52:46.330 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 attr=2, projid32bit=1
2026-04-01T09:52:46.330 INFO:teuthology.orchestra.run.vm03.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-04-01T09:52:46.330 INFO:teuthology.orchestra.run.vm03.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-04-01T09:52:46.330 INFO:teuthology.orchestra.run.vm03.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-04-01T09:52:46.331 INFO:teuthology.orchestra.run.vm03.stdout: = sunit=0 swidth=0 blks
2026-04-01T09:52:46.331 INFO:teuthology.orchestra.run.vm03.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-04-01T09:52:46.331 INFO:teuthology.orchestra.run.vm03.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-04-01T09:52:46.331 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-04-01T09:52:46.331 INFO:teuthology.orchestra.run.vm03.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-04-01T09:52:46.337 INFO:teuthology.orchestra.run.vm03.stdout:Discarding blocks...Done.
2026-04-01T09:52:46.338 INFO:tasks.ceph:mount /dev/vg_nvme/lv_4 on ubuntu@vm03.local -o noatime
2026-04-01T09:52:46.338 DEBUG:teuthology.orchestra.run.vm03:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-7
2026-04-01T09:52:46.409 DEBUG:teuthology.orchestra.run.vm03:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-7
2026-04-01T09:52:46.479 DEBUG:teuthology.orchestra.run.vm03:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 4 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:46.559 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:46.557+0000 7fd2f0e5c900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-4/keyring: can't open /var/lib/ceph/osd/ceph-4/keyring: (2) No such file or directory
2026-04-01T09:52:46.559 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:46.558+0000 7fd2f0e5c900 -1 created new key in keyring /var/lib/ceph/osd/ceph-4/keyring
2026-04-01T09:52:46.559 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:46.558+0000 7fd2f0e5c900 -1 bdev(0x56270405b800 /var/lib/ceph/osd/ceph-4/block) open stat got: (1) Operation not permitted
2026-04-01T09:52:46.559 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:46.558+0000 7fd2f0e5c900 -1 bluestore(/var/lib/ceph/osd/ceph-4) _read_fsid unparsable uuid
2026-04-01T09:52:47.212 DEBUG:teuthology.orchestra.run.vm03:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
2026-04-01T09:52:47.277 DEBUG:teuthology.orchestra.run.vm03:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 5 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:47.358 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:47.356+0000 7f026a9b9900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-5/keyring: can't open /var/lib/ceph/osd/ceph-5/keyring: (2) No such file or directory
2026-04-01T09:52:47.358 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:47.356+0000 7f026a9b9900 -1 created new key in keyring /var/lib/ceph/osd/ceph-5/keyring
2026-04-01T09:52:47.358 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:47.356+0000 7f026a9b9900 -1 bdev(0x556856ef9800 /var/lib/ceph/osd/ceph-5/block) open stat got: (1) Operation not permitted
2026-04-01T09:52:47.358 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:47.356+0000 7f026a9b9900 -1 bluestore(/var/lib/ceph/osd/ceph-5) _read_fsid unparsable uuid
2026-04-01T09:52:48.116 DEBUG:teuthology.orchestra.run.vm03:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-04-01T09:52:48.182 DEBUG:teuthology.orchestra.run.vm03:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 6 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:48.263 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:48.261+0000 7fd214cb9900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-6/keyring: can't open /var/lib/ceph/osd/ceph-6/keyring: (2) No such file or directory
2026-04-01T09:52:48.263 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:48.261+0000 7fd214cb9900 -1 created new key in keyring /var/lib/ceph/osd/ceph-6/keyring
2026-04-01T09:52:48.263 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:48.261+0000 7fd214cb9900 -1 bdev(0x55cfce049800 /var/lib/ceph/osd/ceph-6/block) open stat got: (1) Operation not permitted
2026-04-01T09:52:48.263 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:48.261+0000 7fd214cb9900 -1 bluestore(/var/lib/ceph/osd/ceph-6) _read_fsid unparsable uuid
2026-04-01T09:52:49.713 DEBUG:teuthology.orchestra.run.vm03:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
2026-04-01T09:52:49.787 DEBUG:teuthology.orchestra.run.vm03:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 7 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:49.876 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:49.874+0000 7fcbe867b900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-7/keyring: can't open /var/lib/ceph/osd/ceph-7/keyring: (2) No such file or directory
2026-04-01T09:52:49.876 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:49.875+0000 7fcbe867b900 -1 created new key in keyring /var/lib/ceph/osd/ceph-7/keyring
2026-04-01T09:52:49.876 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:49.875+0000 7fcbe867b900 -1 bdev(0x55d3daf85800 /var/lib/ceph/osd/ceph-7/block) open stat got: (1) Operation not permitted
2026-04-01T09:52:49.877 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:52:49.875+0000 7fcbe867b900 -1 bluestore(/var/lib/ceph/osd/ceph-7) _read_fsid unparsable uuid
2026-04-01T09:52:50.547 DEBUG:teuthology.orchestra.run.vm03:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
2026-04-01T09:52:50.572 INFO:tasks.ceph:Reading keys from all nodes...
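The per-OSD provisioning loop visible in the log above (mkdir, mkfs.xfs, mount, restorecon, `ceph-osd --mkfs --mkkey`, chown) can be condensed into a short sketch. This is a hypothetical reconstruction for readability, not teuthology's actual implementation; the wrapper scripts (`adjust-ulimits`, `ceph-coverage`) are omitted, and `DRYRUN=1` (the default here) only prints the commands instead of running them:

```shell
# Hypothetical condensation of the per-OSD provisioning sequence in the log.
# With DRYRUN=1 (default) commands are echoed rather than executed.
run() {
    if [ "${DRYRUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi
}

provision_osd() {
    id=$1; dev=$2
    run sudo mkdir -p "/var/lib/ceph/osd/ceph-$id"
    run sudo mkfs.xfs -f -i size=2048 "$dev"
    run sudo mount -t xfs -o noatime "$dev" "/var/lib/ceph/osd/ceph-$id"
    run sudo /sbin/restorecon "/var/lib/ceph/osd/ceph-$id"
    # Creates the OSD data dir and its keyring; the "can't open .../keyring"
    # stderr lines in the log are expected on a first-time --mkkey run.
    run sudo ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey \
        -i "$id" --monmap /home/ubuntu/cephtest/ceph.monmap
    run sudo chown -R ceph:ceph "/var/lib/ceph/osd/ceph-$id"
}

provision_osd 0 /dev/vg_nvme/lv_1
```

Note the ordering constraint the log also reflects: `chown -R ceph:ceph` must come after `ceph-osd --mkfs`, since the mkfs step runs as root and leaves root-owned files behind.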
2026-04-01T09:52:50.572 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:50.572 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/mgr/ceph-y/keyring of=/dev/stdout
2026-04-01T09:52:50.604 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:50.604 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-0/keyring of=/dev/stdout
2026-04-01T09:52:50.672 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:50.672 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-1/keyring of=/dev/stdout
2026-04-01T09:52:50.738 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:50.738 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-2/keyring of=/dev/stdout
2026-04-01T09:52:50.800 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:50.800 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-3/keyring of=/dev/stdout
2026-04-01T09:52:50.866 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:50.866 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/mgr/ceph-x/keyring of=/dev/stdout
2026-04-01T09:52:50.891 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:50.891 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-4/keyring of=/dev/stdout
2026-04-01T09:52:50.953 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:50.953 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-5/keyring of=/dev/stdout
2026-04-01T09:52:51.019 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:51.019 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-6/keyring of=/dev/stdout
2026-04-01T09:52:51.086 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:51.086 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-7/keyring of=/dev/stdout
2026-04-01T09:52:51.151 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:51.151 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.0.keyring of=/dev/stdout
2026-04-01T09:52:51.170 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:51.170 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.client.1.keyring of=/dev/stdout
2026-04-01T09:52:51.206 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-04-01T09:52:51.206 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.client.2.keyring of=/dev/stdout
2026-04-01T09:52:51.227 INFO:tasks.ceph:Adding keys to all mons...
2026-04-01T09:52:51.227 DEBUG:teuthology.orchestra.run.vm00:> sudo tee -a /etc/ceph/ceph.keyring
2026-04-01T09:52:51.229 DEBUG:teuthology.orchestra.run.vm03:> sudo tee -a /etc/ceph/ceph.keyring
2026-04-01T09:52:51.254 INFO:teuthology.orchestra.run.vm00.stdout:[mgr.y]
2026-04-01T09:52:51.254 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDo6sxpdh5IIhAATRVRtwWO4g40watP/XYK5w==
2026-04-01T09:52:51.254 INFO:teuthology.orchestra.run.vm00.stdout:[osd.0]
2026-04-01T09:52:51.254 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDq6sxpKuh1ABAA/E8h+S9LJj6RQPfL/hThfA==
2026-04-01T09:52:51.254 INFO:teuthology.orchestra.run.vm00.stdout:[osd.1]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDq6sxpY6CyNRAA3Owz8wR1PCt4z7DlEnPefw==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[osd.2]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDr6sxp+9E5LRAAtVKxN9H/FayISE9WdOhDaQ==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[osd.3]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDs6sxpVduUJBAAbSR9ZF1S71YalSpqTytX/A==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[mgr.x]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDo6sxpxOxXJRAAq1LwW/PWSuTET3DTmJivwA==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[osd.4]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDu6sxpyc9VIRAAhZ0TRL8feQV6B8yvrjoFrw==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[osd.5]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDv6sxpAsRUFRAA/MsJzY64SgVMwNMYoIndkA==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[osd.6]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDw6sxpzmKrDxAAnQFE9W5JRzGuU/TFzV/2SQ==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[osd.7]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDx6sxpU689NBAAtAVZNBYEu1d/0ridtamVDA==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[client.0]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDo6sxpGr8OKBAAXoe57qKjTxNASIGoz8/uHA==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[client.1]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDo6sxpa1FhKxAAsoquxDrdx2PCODddezzpfg==
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout:[client.2]
2026-04-01T09:52:51.255 INFO:teuthology.orchestra.run.vm00.stdout: key = AQDo6sxpKFWgLhAAk75Hq3mnGSqEit2qXU91cQ==
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[mgr.y]
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDo6sxpdh5IIhAATRVRtwWO4g40watP/XYK5w==
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[osd.0]
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDq6sxpKuh1ABAA/E8h+S9LJj6RQPfL/hThfA==
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[osd.1]
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDq6sxpY6CyNRAA3Owz8wR1PCt4z7DlEnPefw==
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[osd.2]
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDr6sxp+9E5LRAAtVKxN9H/FayISE9WdOhDaQ==
2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[osd.3] 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDs6sxpVduUJBAAbSR9ZF1S71YalSpqTytX/A== 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[mgr.x] 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDo6sxpxOxXJRAAq1LwW/PWSuTET3DTmJivwA== 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[osd.4] 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDu6sxpyc9VIRAAhZ0TRL8feQV6B8yvrjoFrw== 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[osd.5] 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDv6sxpAsRUFRAA/MsJzY64SgVMwNMYoIndkA== 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[osd.6] 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDw6sxpzmKrDxAAnQFE9W5JRzGuU/TFzV/2SQ== 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[osd.7] 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDx6sxpU689NBAAtAVZNBYEu1d/0ridtamVDA== 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[client.0] 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDo6sxpGr8OKBAAXoe57qKjTxNASIGoz8/uHA== 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[client.1] 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDo6sxpa1FhKxAAsoquxDrdx2PCODddezzpfg== 2026-04-01T09:52:51.270 INFO:teuthology.orchestra.run.vm03.stdout:[client.2] 2026-04-01T09:52:51.271 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDo6sxpKFWgLhAAk75Hq3mnGSqEit2qXU91cQ== 2026-04-01T09:52:51.271 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.y --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *' 2026-04-01T09:52:51.297 
DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.y --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-04-01T09:52:51.356 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.357 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.400 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.401 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.488 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.489 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.538 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.539 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.590 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-04-01T09:52:51.591 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-04-01T09:52:51.637 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.4 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.638 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.4 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.683 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.5 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.685 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.5 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.732 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.6 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.733 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.6 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.783 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.7 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.784 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.7 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-04-01T09:52:51.834 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-04-01T09:52:51.835 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-04-01T09:52:51.886 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.1 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-04-01T09:52:51.887 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.1 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-04-01T09:52:51.939 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.2 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-04-01T09:52:51.941 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.2 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-04-01T09:52:51.992 INFO:tasks.ceph:Running mkfs on mon nodes...
2026-04-01T09:52:51.992 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mon/ceph-a
2026-04-01T09:52:52.018 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i a --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring
2026-04-01T09:52:52.119 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-a
2026-04-01T09:52:52.147 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mon/ceph-c
2026-04-01T09:52:52.214 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i c --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring
2026-04-01T09:52:52.315 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-c
2026-04-01T09:52:52.344 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /var/lib/ceph/mon/ceph-b
2026-04-01T09:52:52.374 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i b --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring
2026-04-01T09:52:52.474 DEBUG:teuthology.orchestra.run.vm03:> sudo
chown -R ceph:ceph /var/lib/ceph/mon/ceph-b
2026-04-01T09:52:52.502 DEBUG:teuthology.orchestra.run.vm00:> rm -- /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:52.504 DEBUG:teuthology.orchestra.run.vm03:> rm -- /home/ubuntu/cephtest/ceph.monmap
2026-04-01T09:52:52.561 INFO:tasks.ceph:Starting mon daemons in cluster ceph...
2026-04-01T09:52:52.561 INFO:tasks.ceph.mon.a:Restarting daemon
2026-04-01T09:52:52.562 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a
2026-04-01T09:52:52.563 INFO:tasks.ceph.mon.a:Started
2026-04-01T09:52:52.563 INFO:tasks.ceph.mon.c:Restarting daemon
2026-04-01T09:52:52.563 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i c
2026-04-01T09:52:52.565 INFO:tasks.ceph.mon.c:Started
2026-04-01T09:52:52.565 INFO:tasks.ceph.mon.b:Restarting daemon
2026-04-01T09:52:52.565 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i b
2026-04-01T09:52:52.603 INFO:tasks.ceph.mon.b:Started
2026-04-01T09:52:52.604 INFO:tasks.ceph:Starting mgr daemons in cluster ceph...
2026-04-01T09:52:52.604 INFO:tasks.ceph.mgr.y:Restarting daemon
2026-04-01T09:52:52.604 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i y
2026-04-01T09:52:52.605 INFO:tasks.ceph.mgr.y:Started
2026-04-01T09:52:52.605 INFO:tasks.ceph.mgr.x:Restarting daemon
2026-04-01T09:52:52.605 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x
2026-04-01T09:52:52.606 INFO:tasks.ceph.mgr.x:Started
2026-04-01T09:52:52.607 DEBUG:tasks.ceph:set 0 configs
2026-04-01T09:52:52.607 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph config dump
2026-04-01T09:52:57.925 INFO:teuthology.orchestra.run.vm00.stdout:WHO MASK LEVEL OPTION VALUE RO
2026-04-01T09:52:57.935 INFO:tasks.ceph:Setting crush tunables to default
2026-04-01T09:52:57.935 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd crush tunables default
2026-04-01T09:52:58.057 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default
2026-04-01T09:52:58.069 INFO:tasks.ceph:check_enable_crimson: False
2026-04-01T09:52:58.069 INFO:tasks.ceph:Starting osd daemons in cluster ceph...
2026-04-01T09:52:58.069 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:58.069 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-0/fsid of=/dev/stdout
2026-04-01T09:52:58.100 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:58.100 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-1/fsid of=/dev/stdout
2026-04-01T09:52:58.168 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:58.168 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-2/fsid of=/dev/stdout
2026-04-01T09:52:58.234 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-04-01T09:52:58.234 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-3/fsid of=/dev/stdout
2026-04-01T09:52:58.299 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:58.299 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-4/fsid of=/dev/stdout
2026-04-01T09:52:58.326 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:58.326 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-5/fsid of=/dev/stdout
2026-04-01T09:52:58.397 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:58.397 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-6/fsid of=/dev/stdout
2026-04-01T09:52:58.460 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-04-01T09:52:58.460 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-7/fsid of=/dev/stdout
2026-04-01T09:52:58.528 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph --cluster ceph osd new 03a36eec-286a-4abd-b74f-adadfd5f6866 0
2026-04-01T09:52:58.638 INFO:tasks.ceph.mgr.x.vm03.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs.
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-04-01T09:52:58.638 INFO:tasks.ceph.mgr.x.vm03.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-04-01T09:52:58.638 INFO:tasks.ceph.mgr.x.vm03.stderr: from numpy import show_config as show_numpy_config
2026-04-01T09:52:58.693 INFO:teuthology.orchestra.run.vm03.stdout:0
2026-04-01T09:52:58.704 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph --cluster ceph osd new 14fafcf0-3ba7-491c-8da7-5f87cfc5dd98 1
2026-04-01T09:52:58.827 INFO:teuthology.orchestra.run.vm03.stdout:1
2026-04-01T09:52:58.838 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph --cluster ceph osd new 8d7430ae-17ba-409a-876e-94433116c19a 2
2026-04-01T09:52:58.962 INFO:teuthology.orchestra.run.vm03.stdout:2
2026-04-01T09:52:58.972 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph --cluster ceph osd new b977fa81-bf62-4126-b820-2f6b40ca2a3a 3
2026-04-01T09:52:59.097 INFO:teuthology.orchestra.run.vm03.stdout:3
2026-04-01T09:52:59.107 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph --cluster ceph osd new e0357523-5fe6-45a1-8098-ff2effc845fb 4
2026-04-01T09:52:59.228 INFO:teuthology.orchestra.run.vm03.stdout:4
2026-04-01T09:52:59.238 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph --cluster ceph osd new 73a6b698-0034-4df8-9d43-7d94523b0b3d 5
2026-04-01T09:52:59.362 INFO:teuthology.orchestra.run.vm03.stdout:5
2026-04-01T09:52:59.372 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph --cluster ceph osd new 55819012-c4f8-492c-b942-f50004304417 6
2026-04-01T09:52:59.478 INFO:tasks.ceph.mgr.y.vm00.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs.
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-04-01T09:52:59.478 INFO:tasks.ceph.mgr.y.vm00.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-04-01T09:52:59.478 INFO:tasks.ceph.mgr.y.vm00.stderr: from numpy import show_config as show_numpy_config
2026-04-01T09:52:59.506 INFO:teuthology.orchestra.run.vm03.stdout:6
2026-04-01T09:52:59.515 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph --cluster ceph osd new c473149c-9ca7-4e55-a8f0-10087620988f 7
2026-04-01T09:52:59.649 INFO:teuthology.orchestra.run.vm03.stdout:7
2026-04-01T09:52:59.658 INFO:tasks.ceph.osd.0:Restarting daemon
2026-04-01T09:52:59.658 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0
2026-04-01T09:52:59.660 INFO:tasks.ceph.osd.0:Started
2026-04-01T09:52:59.660 INFO:tasks.ceph.osd.1:Restarting daemon
2026-04-01T09:52:59.660 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1
2026-04-01T09:52:59.662 INFO:tasks.ceph.osd.1:Started
2026-04-01T09:52:59.662 INFO:tasks.ceph.osd.2:Restarting daemon
2026-04-01T09:52:59.662 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2
2026-04-01T09:52:59.663 INFO:tasks.ceph.osd.2:Started
2026-04-01T09:52:59.663 INFO:tasks.ceph.osd.3:Restarting daemon
2026-04-01T09:52:59.663 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3
2026-04-01T09:52:59.666 INFO:tasks.ceph.osd.3:Started
2026-04-01T09:52:59.666 INFO:tasks.ceph.osd.4:Restarting daemon
2026-04-01T09:52:59.666 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 4
2026-04-01T09:52:59.668 INFO:tasks.ceph.osd.4:Started
2026-04-01T09:52:59.668 INFO:tasks.ceph.osd.5:Restarting daemon
2026-04-01T09:52:59.668 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 5
2026-04-01T09:52:59.669 INFO:tasks.ceph.osd.5:Started
2026-04-01T09:52:59.669 INFO:tasks.ceph.osd.6:Restarting daemon
2026-04-01T09:52:59.669 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 6
2026-04-01T09:52:59.671 INFO:tasks.ceph.osd.6:Started
2026-04-01T09:52:59.671 INFO:tasks.ceph.osd.7:Restarting daemon
2026-04-01T09:52:59.671 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 7
2026-04-01T09:52:59.675 INFO:tasks.ceph.osd.7:Started
2026-04-01T09:52:59.675 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json
2026-04-01T09:52:59.839 INFO:tasks.ceph.osd.4.vm03.stderr:2026-04-01T09:52:59.837+0000 7f9b79f04900 -1 Falling back to public interface
2026-04-01T09:52:59.843 INFO:tasks.ceph.osd.6.vm03.stderr:2026-04-01T09:52:59.842+0000 7f9fc292c900 -1 Falling back to public interface
2026-04-01T09:52:59.850 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:52:59.850
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":10,"fsid":"9d6defe0-49c7-414d-80c7-4853a2cdd635","created":"2026-04-01T09:52:57.866853+0000","modified":"2026-04-01T09:52:59.643938+0000","last_up_change":"0.000000","last_in_change":"2026-04-01T09:52:59.643938+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"03a36eec-286a-4abd-b74f-adadfd5f6866","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"14fafcf0-3ba7-491c-8da7-5f87cfc5dd98","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":2,"uuid":"8d7430ae-17ba-409a-876e-94433116c19a","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"b977fa81-bf62-4126-b820-2f6b40ca2a3a","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":4,"uuid":"e0357523-5fe6-45a1-8098-ff2effc845fb","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":5,"uuid":"73a6b698-0034-4df8-9d43-7d94523b0b3d","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":6,"uuid":"55819012-c4f8-492c-b942-f50004304417","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":7,"uuid":"c473149c-9ca7-4e55-a8f0-10087620988f","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-04-01T09:52:59.857 INFO:tasks.ceph.osd.7.vm03.stderr:2026-04-01T09:52:59.856+0000 7f71f1edf900 -1 Falling back to public interface
2026-04-01T09:52:59.858
INFO:tasks.ceph.osd.5.vm03.stderr:2026-04-01T09:52:59.857+0000 7f9268f82900 -1 Falling back to public interface
2026-04-01T09:52:59.863 INFO:tasks.ceph.ceph_manager.ceph:[]
2026-04-01T09:52:59.863 INFO:tasks.ceph:Waiting for OSDs to come up
2026-04-01T09:52:59.876 INFO:tasks.ceph.osd.2.vm00.stderr:2026-04-01T09:52:59.873+0000 7fd3dd80c900 -1 Falling back to public interface
2026-04-01T09:52:59.876 INFO:tasks.ceph.osd.2.vm00.stderr:2026-04-01T09:52:59.873+0000 7fd3dbfd1640 -1 PosixStack listen unable to listen on v2:0.0.0.0:6802/0: (98) Address already in use
2026-04-01T09:52:59.876 INFO:tasks.ceph.osd.3.vm00.stderr:2026-04-01T09:52:59.873+0000 7fa988ed1900 -1 Falling back to public interface
2026-04-01T09:52:59.885 INFO:tasks.ceph.osd.0.vm00.stderr:2026-04-01T09:52:59.883+0000 7f6f8a9d7900 -1 Falling back to public interface
2026-04-01T09:52:59.902 INFO:tasks.ceph.osd.1.vm00.stderr:2026-04-01T09:52:59.900+0000 7f3f8f7d7900 -1 Falling back to public interface
2026-04-01T09:53:00.199 INFO:tasks.ceph.osd.4.vm03.stderr:2026-04-01T09:53:00.197+0000 7f9b79f04900 -1 osd.4 0 log_to_monitors true
2026-04-01T09:53:00.230 INFO:tasks.ceph.osd.6.vm03.stderr:2026-04-01T09:53:00.228+0000 7f9fc292c900 -1 osd.6 0 log_to_monitors true
2026-04-01T09:53:00.266 INFO:tasks.ceph.osd.7.vm03.stderr:2026-04-01T09:53:00.264+0000 7f71f1edf900 -1 osd.7 0 log_to_monitors true
2026-04-01T09:53:00.304 INFO:tasks.ceph.osd.5.vm03.stderr:2026-04-01T09:53:00.302+0000 7f9268f82900 -1 osd.5 0 log_to_monitors true
2026-04-01T09:53:00.382 INFO:tasks.ceph.osd.3.vm00.stderr:2026-04-01T09:53:00.380+0000 7fa988ed1900 -1 osd.3 0 log_to_monitors true
2026-04-01T09:53:00.421 INFO:tasks.ceph.osd.0.vm00.stderr:2026-04-01T09:53:00.420+0000 7f6f8a9d7900 -1 osd.0 0 log_to_monitors true
2026-04-01T09:53:00.442 INFO:tasks.ceph.osd.2.vm00.stderr:2026-04-01T09:53:00.441+0000 7fd3dd80c900 -1 osd.2 0 log_to_monitors true
2026-04-01T09:53:00.452 INFO:tasks.ceph.osd.1.vm00.stderr:2026-04-01T09:53:00.450+0000 7f3f8f7d7900 -1
osd.1 0 log_to_monitors true
2026-04-01T09:53:00.669 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json
2026-04-01T09:53:00.773 INFO:teuthology.misc.health.vm00.stdout:
2026-04-01T09:53:00.773 INFO:teuthology.misc.health.vm00.stdout:{"epoch":10,"fsid":"9d6defe0-49c7-414d-80c7-4853a2cdd635","created":"2026-04-01T09:52:57.866853+0000","modified":"2026-04-01T09:52:59.643938+0000","last_up_change":"0.000000","last_in_change":"2026-04-01T09:52:59.643938+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"03a36eec-286a-4abd-b74f-adadfd5f6866","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"14fafcf0-3ba7-491c-8da7-5f87cfc5dd98","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family
0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"8d7430ae-17ba-409a-876e-94433116c19a","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"b977fa81-bf62-4126-b820-2f6b40ca2a3a","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":4,"uuid":"e0357523-5fe6-45a1-8098-ff2effc845fb","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":5,"uuid":"73a6b698-0034-4df8-9d43-7d94523b0b3d","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":6,"uuid":"55819012-c4f8-492c-b942-f50004304417","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":7,"uuid":"c473149c-9ca7-4e55-a8f0-10087620988f","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-04-01T09:53:00.779 DEBUG:teuthology.misc:0 of 8 OSDs are up 2026-04-01T09:53:01.547 INFO:tasks.ceph.mgr.x.vm03.stderr:2026-04-01T09:53:01.545+0000 7fa71a876640 -1 mgr.server handle_report got 
status from non-daemon mon.b 2026-04-01T09:53:01.547 INFO:tasks.ceph.mgr.x.vm03.stderr:2026-04-01T09:53:01.546+0000 7fa71a876640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-04-01T09:53:05.857 INFO:tasks.ceph.osd.5.vm03.stderr:2026-04-01T09:53:05.855+0000 7f9264f10640 -1 osd.5 0 waiting for initial osdmap 2026-04-01T09:53:05.869 INFO:tasks.ceph.osd.5.vm03.stderr:2026-04-01T09:53:05.868+0000 7f925fd14640 -1 osd.5 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-04-01T09:53:06.057 INFO:tasks.ceph.osd.4.vm03.stderr:2026-04-01T09:53:06.055+0000 7f9b75e94640 -1 osd.4 0 waiting for initial osdmap 2026-04-01T09:53:06.066 INFO:tasks.ceph.osd.4.vm03.stderr:2026-04-01T09:53:06.064+0000 7f9b70c98640 -1 osd.4 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-04-01T09:53:06.130 INFO:tasks.ceph.osd.7.vm03.stderr:2026-04-01T09:53:06.129+0000 7f71ee67f640 -1 osd.7 0 waiting for initial osdmap 2026-04-01T09:53:06.139 INFO:tasks.ceph.osd.6.vm03.stderr:2026-04-01T09:53:06.137+0000 7f9fbe8bc640 -1 osd.6 0 waiting for initial osdmap 2026-04-01T09:53:06.141 INFO:tasks.ceph.osd.7.vm03.stderr:2026-04-01T09:53:06.140+0000 7f71e8c71640 -1 osd.7 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-04-01T09:53:06.155 INFO:tasks.ceph.osd.6.vm03.stderr:2026-04-01T09:53:06.153+0000 7f9fb96c0640 -1 osd.6 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-04-01T09:53:06.284 INFO:tasks.ceph.osd.2.vm00.stderr:2026-04-01T09:53:06.283+0000 7fd3d97cc640 -1 osd.2 0 waiting for initial osdmap 2026-04-01T09:53:06.301 INFO:tasks.ceph.osd.2.vm00.stderr:2026-04-01T09:53:06.300+0000 7fd3d3dbe640 -1 osd.2 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-04-01T09:53:06.321 
INFO:tasks.ceph.osd.1.vm00.stderr:2026-04-01T09:53:06.320+0000 7f3f8b765640 -1 osd.1 0 waiting for initial osdmap 2026-04-01T09:53:06.333 INFO:tasks.ceph.osd.1.vm00.stderr:2026-04-01T09:53:06.331+0000 7f3f86569640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-04-01T09:53:06.391 INFO:tasks.ceph.osd.0.vm00.stderr:2026-04-01T09:53:06.389+0000 7f6f86965640 -1 osd.0 0 waiting for initial osdmap 2026-04-01T09:53:06.402 INFO:tasks.ceph.osd.0.vm00.stderr:2026-04-01T09:53:06.401+0000 7f6f81769640 -1 osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-04-01T09:53:06.403 INFO:tasks.ceph.osd.3.vm00.stderr:2026-04-01T09:53:06.401+0000 7fa985671640 -1 osd.3 0 waiting for initial osdmap 2026-04-01T09:53:06.422 INFO:tasks.ceph.osd.3.vm00.stderr:2026-04-01T09:53:06.421+0000 7fa97fc63640 -1 osd.3 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-04-01T09:53:07.583 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-04-01T09:53:07.811 INFO:teuthology.misc.health.vm00.stdout: 2026-04-01T09:53:07.812 
INFO:teuthology.misc.health.vm00.stdout:{"epoch":13,"fsid":"9d6defe0-49c7-414d-80c7-4853a2cdd635","created":"2026-04-01T09:52:57.866853+0000","modified":"2026-04-01T09:53:06.861873+0000","last_up_change":"2026-04-01T09:53:06.861873+0000","last_in_change":"2026-04-01T09:52:59.643938+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"03a36eec-286a-4abd-b74f-adadfd5f6866","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6817","nonce":404159356}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6819","nonce":404159356}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6823","nonce":404159356}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6821","nonce":404159356}]},"public_addr":"192.168.123.100:6817/404159356","cluster_addr":"192.168.123.100:6819/404159356","heartbeat_back_addr":"192.168.123.100:6823/404159356","heartbeat_front_addr":"192.168.123.100:6821/404159356","state":["exists","up"]},{"osd":1,"uuid":"14fafcf0-3ba7-491c-8da7-5f87cfc5dd98","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"do
wn_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6825","nonce":3051551160}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6827","nonce":3051551160}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6831","nonce":3051551160}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6829","nonce":3051551160}]},"public_addr":"192.168.123.100:6825/3051551160","cluster_addr":"192.168.123.100:6827/3051551160","heartbeat_back_addr":"192.168.123.100:6831/3051551160","heartbeat_front_addr":"192.168.123.100:6829/3051551160","state":["exists","up"]},{"osd":2,"uuid":"8d7430ae-17ba-409a-876e-94433116c19a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6805","nonce":809661050}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6809","nonce":809661050}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6815","nonce":809661050}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6813","nonce":809661050}]},"public_addr":"192.168.123.100:6805/809661050","cluster_addr":"192.168.123.100:6809/809661050","heartbeat_back_addr":"192.168.123.100:6815/809661050","heartbeat_front_addr":"192.168.123.100:6813/809661050","state":["exists","up"]},{"osd":3,"uuid":"b977fa81-bf62-4126-b820-2f6b40ca2a3a",
"up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6801","nonce":248593229}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6804","nonce":248593229}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6812","nonce":248593229}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6808","nonce":248593229}]},"public_addr":"192.168.123.100:6801/248593229","cluster_addr":"192.168.123.100:6804/248593229","heartbeat_back_addr":"192.168.123.100:6812/248593229","heartbeat_front_addr":"192.168.123.100:6808/248593229","state":["exists","up"]},{"osd":4,"uuid":"e0357523-5fe6-45a1-8098-ff2effc845fb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6801","nonce":1269558049}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6803","nonce":1269558049}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6807","nonce":1269558049}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6805","nonce":1269558049}]},"public_addr":"192.168.123.103:6801/1269558049","cluster_addr":"192.168.123.103:6803/1269558049","heartbeat_back_addr":"192.168.123.103:6807/1269558049","heartbeat_front_addr":
"192.168.123.103:6805/1269558049","state":["exists","up"]},{"osd":5,"uuid":"73a6b698-0034-4df8-9d43-7d94523b0b3d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6820","nonce":2758364680}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6824","nonce":2758364680}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6829","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6831","nonce":2758364680}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6827","nonce":2758364680}]},"public_addr":"192.168.123.103:6820/2758364680","cluster_addr":"192.168.123.103:6824/2758364680","heartbeat_back_addr":"192.168.123.103:6831/2758364680","heartbeat_front_addr":"192.168.123.103:6827/2758364680","state":["exists","up"]},{"osd":6,"uuid":"55819012-c4f8-492c-b942-f50004304417","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6809","nonce":1036159278}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6811","nonce":1036159278}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6815","nonce":1036159278}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6813","nonce":1036159278}]},"public_addr":"192.168.123.103:6809/1036159278","c
luster_addr":"192.168.123.103:6811/1036159278","heartbeat_back_addr":"192.168.123.103:6815/1036159278","heartbeat_front_addr":"192.168.123.103:6813/1036159278","state":["exists","up"]},{"osd":7,"uuid":"c473149c-9ca7-4e55-a8f0-10087620988f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6817","nonce":994620887}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6819","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6821","nonce":994620887}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6830","nonce":994620887}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6823","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6825","nonce":994620887}]},"public_addr":"192.168.123.103:6817/994620887","cluster_addr":"192.168.123.103:6821/994620887","heartbeat_back_addr":"192.168.123.103:6830/994620887","heartbeat_front_addr":"192.168.123.103:6825/994620887","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"la
ggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-04-01T09:53:07.823 DEBUG:teuthology.misc:8 of 8 OSDs are up 2026-04-01T09:53:07.823 INFO:tasks.ceph:Creating RBD pool 2026-04-01T09:53:07.823 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd pool create rbd 8 2026-04-01T09:53:08.892 INFO:teuthology.orchestra.run.vm00.stderr:pool 'rbd' created 2026-04-01T09:53:08.908 DEBUG:teuthology.orchestra.run.vm00:> rbd --cluster ceph pool init rbd 2026-04-01T09:53:08.946 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-04-01T09:53:08.946 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-04-01T09:53:11.913 INFO:tasks.ceph:Starting mds daemons in cluster ceph... 
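The health check logged above (`0 of 8 OSDs are up`, then `8 of 8 OSDs are up` after a re-poll) works by parsing the `ceph osd dump --format=json` output and counting entries whose `"up"` field is 1. A minimal sketch of that check, assuming only the JSON shape visible in the dumps above — the helper name is illustrative, not teuthology's actual API:

```python
import json

def count_up_osds(osd_dump_json: str) -> tuple[int, int]:
    """Return (up, total) from `ceph osd dump --format=json` output.

    An OSD counts as "up" when its "up" field is 1, matching the
    'N of M OSDs are up' messages in the log above.
    """
    dump = json.loads(osd_dump_json)
    osds = dump.get("osds", [])
    up = sum(1 for osd in osds if osd.get("up") == 1)
    return up, len(osds)

# Trimmed example mirroring the second dump, where all 8 OSDs are up:
sample = json.dumps({"osds": [{"osd": i, "up": 1, "in": 1} for i in range(8)]})
print(count_up_osds(sample))  # -> (8, 8)
```

In the actual run, teuthology repeats this poll on an interval until the count reaches the expected total (or a timeout fires), which is why the same `osd dump` command appears twice in the log a few seconds apart.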
2026-04-01T09:53:11.913 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config log 1 --format=json 2026-04-01T09:53:11.913 INFO:tasks.daemonwatchdog.daemon_watchdog:watchdog starting 2026-04-01T09:53:12.169 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:53:12.181 INFO:teuthology.orchestra.run.vm00.stdout:[{"version":9,"timestamp":"2026-04-01T09:53:06.403487+0000","name":"","changes":[{"name":"osd.3/osd_mclock_max_capacity_iops_hdd","new_value":"4247.416388"}]}] 2026-04-01T09:53:12.181 INFO:tasks.ceph_manager:config epoch is 9 2026-04-01T09:53:12.181 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-04-01T09:53:12.181 INFO:tasks.ceph.ceph_manager.ceph:waiting for mgr available 2026-04-01T09:53:12.181 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr dump --format=json 2026-04-01T09:53:12.453 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:53:12.466 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"flags":0,"active_gid":4105,"active_name":"x","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6832","nonce":4265559050},{"type":"v1","addr":"192.168.123.103:6833","nonce":4265559050}]},"active_addr":"192.168.123.103:6833/4265559050","active_change":"2026-04-01T09:53:00.527929+0000","active_mgr_features":4541880224203014143,"available":true,"standbys":[{"gid":4106,"name":"y","mgr_features":4541880224203014143,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for 
certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.2.0","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in 
seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every 
host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus 
deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray 
daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"s
ee_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"defa
ult_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXP
IRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":
"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced"
,"flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health 
metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":true,"error_string":"","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big 
batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server 
user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced"
,"flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"err
or_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","f
lags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"de
fault_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_al
lowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced",
"flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{
"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"te
stnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set internal store backend. for development and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are 
changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","leve
l":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","
level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack traces, 
etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leader
board","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrat
or","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","lo
ng_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["iostat","nfs"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this option can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for 
certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.2.0","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in 
seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every 
host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus 
deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray 
daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"s
ee_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"defa
ult_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXP
IRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":
"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced"
,"flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health 
metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":true,"error_string":"","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big 
batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server 
user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced"
,"flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"err
or_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","f
lags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"de
fault_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_al
lowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced",
"flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{
"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"te
stnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set internal store backend. for develoment and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are 
changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","leve
l":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","
level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, 
etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leader
board","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrat
or","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","lo
ng_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"tentacle":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":0,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":1134499686}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":685208518}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":2753007161}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":1585646452}]}]} 2026-04-01T09:53:12.470 INFO:tasks.ceph.ceph_manager.ceph:mgr available! 
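The payload above is the full `ceph mgr module ls --format=json` output that teuthology inspects before declaring "mgr available!". As a minimal sketch of how such a payload can be consumed, the snippet below extracts option defaults from a trimmed example document (the field names follow the log above; the payload itself is a hypothetical two-field excerpt, not the real dump):

```python
import json

# Trimmed, hypothetical excerpt in the same shape as the
# `ceph mgr module ls --format=json` output logged above.
payload = json.loads("""
{"modules":[{"name":"telemetry","can_run":true,"error_string":"",
 "module_options":{"interval":{"name":"interval","type":"int",
 "level":"advanced","flags":0,"default_value":"24","min":"8","max":"",
 "enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],
 "always_on_modules":{"tentacle":["balancer","crash"]}}
""")

def option_defaults(mod_ls):
    """Map 'module/option' -> default_value for every listed module."""
    out = {}
    for mod in mod_ls["modules"]:
        for name, opt in mod["module_options"].items():
            out[f"{mod['name']}/{name}"] = opt["default_value"]
    return out

print(option_defaults(payload))  # {'telemetry/interval': '24'}
```

Note that `default_value` is always a string in this schema (e.g. `"24"`, `"False"`), so a consumer must coerce by the accompanying `type` field.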
2026-04-01T09:53:12.470 INFO:tasks.ceph.ceph_manager.ceph:waiting for all up 2026-04-01T09:53:12.470 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-04-01T09:53:12.709 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:53:12.710 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":17,"fsid":"9d6defe0-49c7-414d-80c7-4853a2cdd635","created":"2026-04-01T09:52:57.866853+0000","modified":"2026-04-01T09:53:11.899755+0000","last_up_change":"2026-04-01T09:53:06.861873+0000","last_in_change":"2026-04-01T09:52:59.643938+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":"rbd","create_time":"2026-04-01T09:53:08.045900+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":17,"pool_snaps":[],"removed_snaps":"[]","quota
_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2,"score_stable":2,"optimal_score":1,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".mgr","create_time":"2026-04-01T09:53:08.557996+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"16","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":40000
0,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":8,"score_stable":8,"optimal_score":0.25,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"03a36eec-286a-4abd-b74f-adadfd5f6866","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6817","nonce":404159356}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6819","nonce":404159356}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6823","nonce":404159356}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6821","nonce":404159356}]},"public_addr":"192.168.123.100:6817/404159356","cluster_addr":"192.168.123.100:6819/404159356","heartbeat_back_addr":"192.168.123.100:6823/404159356","heartbeat_front_addr":"192.168.123.100:6821/404159356","state":["exists","up"]},{"osd":1,"uuid":"14fafcf0-3ba7-491c-8da7-5f87cfc5dd98","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"
up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6825","nonce":3051551160}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6827","nonce":3051551160}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6831","nonce":3051551160}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6829","nonce":3051551160}]},"public_addr":"192.168.123.100:6825/3051551160","cluster_addr":"192.168.123.100:6827/3051551160","heartbeat_back_addr":"192.168.123.100:6831/3051551160","heartbeat_front_addr":"192.168.123.100:6829/3051551160","state":["exists","up"]},{"osd":2,"uuid":"8d7430ae-17ba-409a-876e-94433116c19a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6805","nonce":809661050}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6809","nonce":809661050}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6815","nonce":809661050}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6813","nonce":809661050}]},"public_addr":"192.168.123.100:6805/809661050","cluster_addr":"192.168.123.100:6809/809661050","heartbeat_back_addr":"192.168.123.100:6815/809661050","heartbeat_front_addr":"192.168.123.100:6813/809661050","state":["exists","up"]},{"osd":3,"uuid":"b977fa81-b
f62-4126-b820-2f6b40ca2a3a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6801","nonce":248593229}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6804","nonce":248593229}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6812","nonce":248593229}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6808","nonce":248593229}]},"public_addr":"192.168.123.100:6801/248593229","cluster_addr":"192.168.123.100:6804/248593229","heartbeat_back_addr":"192.168.123.100:6812/248593229","heartbeat_front_addr":"192.168.123.100:6808/248593229","state":["exists","up"]},{"osd":4,"uuid":"e0357523-5fe6-45a1-8098-ff2effc845fb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6801","nonce":1269558049}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6803","nonce":1269558049}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6807","nonce":1269558049}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6805","nonce":1269558049}]},"public_addr":"192.168.123.103:6801/1269558049","cluster_addr":"192.168.123.103:6803/1269558049","heartbeat_back_addr":"192.168.123.103:6807/12695
58049","heartbeat_front_addr":"192.168.123.103:6805/1269558049","state":["exists","up"]},{"osd":5,"uuid":"73a6b698-0034-4df8-9d43-7d94523b0b3d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6820","nonce":2758364680}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6824","nonce":2758364680}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6829","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6831","nonce":2758364680}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6827","nonce":2758364680}]},"public_addr":"192.168.123.103:6820/2758364680","cluster_addr":"192.168.123.103:6824/2758364680","heartbeat_back_addr":"192.168.123.103:6831/2758364680","heartbeat_front_addr":"192.168.123.103:6827/2758364680","state":["exists","up"]},{"osd":6,"uuid":"55819012-c4f8-492c-b942-f50004304417","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6809","nonce":1036159278}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6811","nonce":1036159278}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6815","nonce":1036159278}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6813","nonce":1036159278}]},"public_addr":"192
.168.123.103:6809/1036159278","cluster_addr":"192.168.123.103:6811/1036159278","heartbeat_back_addr":"192.168.123.103:6815/1036159278","heartbeat_front_addr":"192.168.123.103:6813/1036159278","state":["exists","up"]},{"osd":7,"uuid":"c473149c-9ca7-4e55-a8f0-10087620988f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6817","nonce":994620887}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6819","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6821","nonce":994620887}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6830","nonce":994620887}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6823","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6825","nonce":994620887}]},"public_addr":"192.168.123.103:6817/994620887","cluster_addr":"192.168.123.103:6821/994620887","heartbeat_back_addr":"192.168.123.103:6830/994620887","heartbeat_front_addr":"192.168.123.103:6825/994620887","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.
000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":1,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":1,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-04-01T09:53:12.722 INFO:tasks.ceph.ceph_manager.ceph:all up! 
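At this point ceph_manager has confirmed every OSD is up. The next phase, visible in the log lines that follow, is the stat-flush handshake: `ceph tell osd.N flush_pg_stats` returns a sequence number, and the task then polls `ceph osd last-stat-seq osd.N` until the monitor's recorded sequence for that OSD catches up (the repeated "need seq X got Y for osd.N" lines). A minimal sketch of that wait pattern, assuming a hypothetical `ceph` callable in place of teuthology's real CLI wrapper (the actual implementation lives in `tasks.ceph.ceph_manager`):

```python
import time


def wait_for_flush(osd_ids, ceph, timeout=120, interval=1.0):
    """Wait until each OSD's stat seq on the mon reaches the seq
    returned by 'tell osd.N flush_pg_stats'.

    `ceph` is a callable taking CLI argument strings and returning
    stdout as a string (a stand-in for the test harness's runner).
    """
    # flush_pg_stats prints the sequence number the mon must reach
    need = {i: int(ceph("tell", "osd.%d" % i, "flush_pg_stats"))
            for i in osd_ids}
    deadline = time.time() + timeout
    got = {}
    for i, seq in need.items():
        while True:
            got[i] = int(ceph("osd", "last-stat-seq", "osd.%d" % i))
            if got[i] >= seq:
                break  # corresponds to "need seq N got N for osd.i"
            if time.time() > deadline:
                raise TimeoutError("osd.%d stuck at %d, need %d"
                                   % (i, got[i], seq))
            time.sleep(interval)
    return got
```

Only once every OSD has caught up does the run proceed to "waiting for clean" and the `pg dump` seen further down.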
2026-04-01T09:53:12.722 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-04-01T09:53:12.956 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:53:12.956 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":17,"fsid":"9d6defe0-49c7-414d-80c7-4853a2cdd635","created":"2026-04-01T09:52:57.866853+0000","modified":"2026-04-01T09:53:11.899755+0000","last_up_change":"2026-04-01T09:53:06.861873+0000","last_in_change":"2026-04-01T09:52:59.643938+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":"rbd","create_time":"2026-04-01T09:53:08.045900+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":17,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"w
rite_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2,"score_stable":2,"optimal_score":1,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".mgr","create_time":"2026-04-01T09:53:08.557996+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"16","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro
":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":8,"score_stable":8,"optimal_score":0.25,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"03a36eec-286a-4abd-b74f-adadfd5f6866","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6817","nonce":404159356}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6819","nonce":404159356}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6823","nonce":404159356}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":404159356},{"type":"v1","addr":"192.168.123.100:6821","nonce":404159356}]},"public_addr":"192.168.123.100:6817/404159356","cluster_addr":"192.168.123.100:6819/404159356","heartbeat_back_addr":"192.168.123.100:6823/404159356","heartbeat_front_addr":"192.168.123.100:6821/404159356","state":["exists","up"]},{"osd":1,"uuid":"14fafcf0-3ba7-491c-8da7-5f87cfc5dd98","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{
"type":"v2","addr":"192.168.123.100:6824","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6825","nonce":3051551160}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6827","nonce":3051551160}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6831","nonce":3051551160}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3051551160},{"type":"v1","addr":"192.168.123.100:6829","nonce":3051551160}]},"public_addr":"192.168.123.100:6825/3051551160","cluster_addr":"192.168.123.100:6827/3051551160","heartbeat_back_addr":"192.168.123.100:6831/3051551160","heartbeat_front_addr":"192.168.123.100:6829/3051551160","state":["exists","up"]},{"osd":2,"uuid":"8d7430ae-17ba-409a-876e-94433116c19a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6803","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6805","nonce":809661050}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6807","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6809","nonce":809661050}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6815","nonce":809661050}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6811","nonce":809661050},{"type":"v1","addr":"192.168.123.100:6813","nonce":809661050}]},"public_addr":"192.168.123.100:6805/809661050","cluster_addr":"192.168.123.100:6809/809661050","heartbeat_back_addr":"192.168.123.100:6815/809661050","heartbeat_front_addr":"192.168.123.100:6813/809661050","state":["exists","up"]},{"osd":3,"uuid":"b977fa81-bf62-4126-b820-2f6b40ca2a3a","up":1,"in":1,"weight":1,"primary_affinity":1,"la
st_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6801","nonce":248593229}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6804","nonce":248593229}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6812","nonce":248593229}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":248593229},{"type":"v1","addr":"192.168.123.100:6808","nonce":248593229}]},"public_addr":"192.168.123.100:6801/248593229","cluster_addr":"192.168.123.100:6804/248593229","heartbeat_back_addr":"192.168.123.100:6812/248593229","heartbeat_front_addr":"192.168.123.100:6808/248593229","state":["exists","up"]},{"osd":4,"uuid":"e0357523-5fe6-45a1-8098-ff2effc845fb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6801","nonce":1269558049}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6803","nonce":1269558049}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6807","nonce":1269558049}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1269558049},{"type":"v1","addr":"192.168.123.103:6805","nonce":1269558049}]},"public_addr":"192.168.123.103:6801/1269558049","cluster_addr":"192.168.123.103:6803/1269558049","heartbeat_back_addr":"192.168.123.103:6807/1269558049","heartbeat_front_addr":"192.168.123.103:6805/1269558049","state":["exi
sts","up"]},{"osd":5,"uuid":"73a6b698-0034-4df8-9d43-7d94523b0b3d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6820","nonce":2758364680}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6824","nonce":2758364680}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6829","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6831","nonce":2758364680}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":2758364680},{"type":"v1","addr":"192.168.123.103:6827","nonce":2758364680}]},"public_addr":"192.168.123.103:6820/2758364680","cluster_addr":"192.168.123.103:6824/2758364680","heartbeat_back_addr":"192.168.123.103:6831/2758364680","heartbeat_front_addr":"192.168.123.103:6827/2758364680","state":["exists","up"]},{"osd":6,"uuid":"55819012-c4f8-492c-b942-f50004304417","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6809","nonce":1036159278}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6811","nonce":1036159278}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6815","nonce":1036159278}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1036159278},{"type":"v1","addr":"192.168.123.103:6813","nonce":1036159278}]},"public_addr":"192.168.123.103:6809/1036159278","cluster_addr":"192.168.123.103:6811/1036159278
","heartbeat_back_addr":"192.168.123.103:6815/1036159278","heartbeat_front_addr":"192.168.123.103:6813/1036159278","state":["exists","up"]},{"osd":7,"uuid":"c473149c-9ca7-4e55-a8f0-10087620988f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6817","nonce":994620887}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6819","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6821","nonce":994620887}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6830","nonce":994620887}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6823","nonce":994620887},{"type":"v1","addr":"192.168.123.103:6825","nonce":994620887}]},"public_addr":"192.168.123.103:6817/994620887","cluster_addr":"192.168.123.103:6821/994620887","heartbeat_back_addr":"192.168.123.103:6830/994620887","heartbeat_front_addr":"192.168.123.103:6825/994620887","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":45418802242030141
43,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":1,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":1,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-04-01T09:53:12.971 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats 2026-04-01T09:53:12.971 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats 2026-04-01T09:53:12.971 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 flush_pg_stats 2026-04-01T09:53:12.972 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 flush_pg_stats 2026-04-01T09:53:12.972 
DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.4 flush_pg_stats 2026-04-01T09:53:12.972 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.5 flush_pg_stats 2026-04-01T09:53:12.972 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.6 flush_pg_stats 2026-04-01T09:53:12.972 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.7 flush_pg_stats 2026-04-01T09:53:13.201 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:13.201 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-04-01T09:53:13.229 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:13.229 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-04-01T09:53:13.230 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-04-01T09:53:13.230 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-04-01T09:53:13.240 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:13.240 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-04-01T09:53:13.263 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:13.264 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage 
/home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-04-01T09:53:13.267 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-04-01T09:53:13.268 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.7 2026-04-01T09:53:13.320 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:13.321 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-04-01T09:53:13.341 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:13.341 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.6 2026-04-01T09:53:13.602 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-04-01T09:53:13.625 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.5 2026-04-01T09:53:13.731 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-04-01T09:53:13.746 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.4 2026-04-01T09:53:13.747 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:13.754 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:13.758 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-04-01T09:53:13.771 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574851 for osd.7 2026-04-01T09:53:13.787 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.2 2026-04-01T09:53:13.790 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574851 for osd.3 2026-04-01T09:53:13.796 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-04-01T09:53:13.805 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-04-01T09:53:13.815 
INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.1 2026-04-01T09:53:13.821 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.0 2026-04-01T09:53:13.870 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-04-01T09:53:13.885 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.6 2026-04-01T09:53:14.626 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-04-01T09:53:14.746 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-04-01T09:53:14.772 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.7 2026-04-01T09:53:14.788 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-04-01T09:53:14.790 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-04-01T09:53:14.816 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-04-01T09:53:14.821 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-04-01T09:53:14.876 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-04-01T09:53:14.885 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.6 2026-04-01T09:53:14.915 
INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.5 2026-04-01T09:53:15.110 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-04-01T09:53:15.129 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574852 for osd.3 2026-04-01T09:53:15.129 DEBUG:teuthology.parallel:result is None 2026-04-01T09:53:15.165 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:15.188 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.4 2026-04-01T09:53:15.188 DEBUG:teuthology.parallel:result is None 2026-04-01T09:53:15.250 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-04-01T09:53:15.262 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:15.270 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574852 for osd.7 2026-04-01T09:53:15.271 DEBUG:teuthology.parallel:result is None 2026-04-01T09:53:15.276 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:15.280 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.2 2026-04-01T09:53:15.280 DEBUG:teuthology.parallel:result is None 2026-04-01T09:53:15.293 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.0 2026-04-01T09:53:15.293 DEBUG:teuthology.parallel:result is None 2026-04-01T09:53:15.305 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:15.319 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.1 2026-04-01T09:53:15.319 DEBUG:teuthology.parallel:result is None 2026-04-01T09:53:15.353 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:15.365 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.6 2026-04-01T09:53:15.366 DEBUG:teuthology.parallel:result is None 2026-04-01T09:53:15.916 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd 
last-stat-seq osd.5 2026-04-01T09:53:16.147 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-04-01T09:53:16.160 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.5 2026-04-01T09:53:16.160 DEBUG:teuthology.parallel:result is None 2026-04-01T09:53:16.160 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-04-01T09:53:16.160 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-04-01T09:53:16.382 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:53:16.382 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-04-01T09:53:16.396 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":14,"stamp":"2026-04-01T09:53:14.539634+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size
":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":18,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":754974720,"kb_used":217328,"kb_used_data":2224,"kb_used_omap":53,"kb_used_meta":214474,"kb_avail":754757392,"statfs":{"total":773094113280,"available":772871569408,"internally_reserved":0,"allocated":2277376,"data_stored":1343718,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":54607,"internal_metadata":219622065},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting"
:0,"num_store_stats":0,"stamp_delta":"2.000299"},"pg_stats":[{"pgid":"1.7","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.127836+0000","last_change":"2026-04-01T09:53:12.127925+0000","last_active":"2026-04-01T09:53:12.127836+0000","last_peered":"2026-04-01T09:53:12.127836+0000","last_clean":"2026-04-01T09:53:12.127836+0000","last_became_active":"2026-04-01T09:53:09.898562+0000","last_became_peered":"2026-04-01T09:53:09.898562+0000","last_unstale":"2026-04-01T09:53:12.127836+0000","last_undegraded":"2026-04-01T09:53:12.127836+0000","last_fullsized":"2026-04-01T09:53:12.127836+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T14:00:14.397807+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00024032,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3],"acting":[6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.6","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.127304+0000","last_change":"2026-04-01T09:53:12.127786+0000","last_active":"2026-04-01T09:53:12.127304+0000","last_peered":"2026-04-01T09:53:12.127304+0000","last_clean":"2026-04-01T09:53:12.127304+0000","last_became_active":"2026-04-01T09:53:10.249226+0000","last_became_peered":"2026-04-01T09:53:10.249226+0000","last_unstale":"2026-04-01T09:53:12.127304+0000","last_undegraded":"2026-04-01T09:53:12.127304+0000","last_fullsized":"2026-04-01T09:53:12.127304+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_s
tamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T12:02:35.713359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00064587900000000003,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0],"acting":[6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.5","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.126812+0000","last_change":"2026-04-01T09:53:12.126903+0000","last_active":"2026-04-01T09:53:12.126812+0000","last_peered":"2026-04-01T09:53:12.126812+0000","last_clean":"2026-04-01T09:53:12.126812+0000","last_became_active":"2026-04-01T09:53:09.
899296+0000","last_became_peered":"2026-04-01T09:53:09.899296+0000","last_unstale":"2026-04-01T09:53:12.126812+0000","last_undegraded":"2026-04-01T09:53:12.126812+0000","last_fullsized":"2026-04-01T09:53:12.126812+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T19:37:45.785458+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000178164,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_
by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.4","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.127196+0000","last_change":"2026-04-01T09:53:12.127315+0000","last_active":"2026-04-01T09:53:12.127196+0000","last_peered":"2026-04-01T09:53:12.127196+0000","last_clean":"2026-04-01T09:53:12.127196+0000","last_became_active":"2026-04-01T09:53:09.899169+0000","last_became_peered":"2026-04-01T09:53:09.899169+0000","last_unstale":"2026-04-01T09:53:12.127196+0000","last_undegraded":"2026-04-01T09:53:12.127196+0000","last_fullsized":"2026-04-01T09:53:12.127196+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T12:02:35.713359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00031563099999999997,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5],"acting":[3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"15'32","reported_seq":59,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:11.905405+0000","last_change":"2026-04-01T09:53:09.899442+0000","last_active":"2026-04-01T09:53:11.905405+0000","last_peered":"2026-04-01T09:53:11.905405+0000","last_clean":"2026-04-01T09:53:11.905405+0000","last_became_active":"2026-04-01T09:53:09.899253+0000","last_became_peered":"2026-04-01T09:53:09.899253+0000","last_unstale":"2026-04-01T09:53:11.905405+0000","last_undegraded":"2026-04-01T09:53:11.905405+0000","last_fullsized":"2026-04-01T09:53:11.905405+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","las
t_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T16:36:21.254268+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1],"acting":[7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"1.3","version":"15'1","reported_seq":21,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:11.905410+0000","last_change":"2026-04-01T09:53:11.905410+0000","last_active":"2026-04-01T09:53:11.905410+0000","last_peered":"2026-04-01T09:53:11.905410+0000","last_clean":"2026-04-01T09:53:11.905410+0000","last_became_active":"2026-04-01T09:53:09.896854+0000","last_b
ecame_peered":"2026-04-01T09:53:09.896854+0000","last_unstale":"2026-04-01T09:53:11.905410+0000","last_undegraded":"2026-04-01T09:53:11.905410+0000","last_fullsized":"2026-04-01T09:53:11.905410+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T11:48:49.915142+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00021853899999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6],"acting":[1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"u
p_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.2","version":"17'2","reported_seq":22,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:11.907669+0000","last_change":"2026-04-01T09:53:11.905867+0000","last_active":"2026-04-01T09:53:11.907669+0000","last_peered":"2026-04-01T09:53:11.907669+0000","last_clean":"2026-04-01T09:53:11.907669+0000","last_became_active":"2026-04-01T09:53:10.250223+0000","last_became_peered":"2026-04-01T09:53:10.250223+0000","last_unstale":"2026-04-01T09:53:11.907669+0000","last_undegraded":"2026-04-01T09:53:11.907669+0000","last_fullsized":"2026-04-01T09:53:11.907669+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T21:50:52.206073+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00037328799999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.1","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.126690+0000","last_change":"2026-04-01T09:53:12.126878+0000","last_active":"2026-04-01T09:53:12.126690+0000","last_peered":"2026-04-01T09:53:12.126690+0000","last_clean":"2026-04-01T09:53:12.126690+0000","last_became_active":"2026-04-01T09:53:10.249822+0000","last_became_peered":"2026-04-01T09:53:10.249822+0000","last_unstale":"2026-04-01T09:53:12.126690+0000","last_undegraded":"2026-04-01T09:53:12.126690+0000","last_fullsized":"2026-04-01T09:53:12.126690+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T12:02:35.713359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00027929300000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0],"acting":[5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:11.905726+0000","last_change":"2026-04-01T09:53:11.905726+0000","last_active":"2026-04-01T09:53:11.905726+0000","last_peered":"2026-04-01T09:53:11.905726+0000","last_clean":"2026-04-01T09:53:11.905726+0000","last_became_active":"2026-04
-01T09:53:10.250563+0000","last_became_peered":"2026-04-01T09:53:10.250563+0000","last_unstale":"2026-04-01T09:53:11.905726+0000","last_undegraded":"2026-04-01T09:53:11.905726+0000","last_fullsized":"2026-04-01T09:53:11.905726+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T13:25:08.529486+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00025401599999999997,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_locat
ion_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]}],"pool_stats":[{"poolid":2,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":925696,"data_stored":918560,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":2,"acting":2,"num_store_stats":2},{"poolid":1,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"nu
m_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":8}],"osd_stats":[{"osd":7,"up_from":13,"seq":55834574852,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27604,"kb_used_data":620,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344236,"statfs":{"total":96636764160,"available":96608497664,"internally_reserved":0,"allocated":634880,"data_stored":512439,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6822,"internal_metadata":27452762},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":13,"seq":55834574851,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6826,"internal_metadata":27452758},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_lat
ency_ns":0},"alerts":[]},{"osd":5,"up_from":13,"seq":55834574851,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6827,"internal_metadata":27452757},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":13,"seq":55834574851,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6827,"internal_metadata":27452757},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":13,"seq":55834574852,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6825,"internal_metadata":27452759},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queu
e_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":2,"up_from":13,"seq":55834574851,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6827,"internal_metadata":27452757},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574851,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27600,"kb_used_data":616,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344240,"statfs":{"total":96636764160,"available":96608501760,"internally_reserved":0,"allocated":630784,"data_stored":512420,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6827,"internal_metadata":27452757},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":13,"seq":55834574851,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27024,"kb_used_data":168,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344816,"statfs":{"total":96636764160,"available":96609091584,"internally_reserved":0,"al
located":172032,"data_stored":53159,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6826,"internal_metadata":27452758},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd"
:7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}}
2026-04-01T09:53:16.396 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json
2026-04-01T09:53:16.607 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T09:53:16.608 INFO:teuthology.orchestra.run.vm00.stderr:dumped all
2026-04-01T09:53:16.620 
INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":15,"stamp":"2026-04-01T09:53:16.539981+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":18,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":754974720,"kb_used":217328,"kb_used_data":2224,"kb_used_omap":53,"kb_used_meta":214474,"kb_avail":754757392,"statfs":{"total":773094113280,"available":772871569408,"internally_reserved":0,"allocated":2277376,"data_stored":1343718,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":54607,"internal_metadata":219622065},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"com
mit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"4.000646"},"pg_stats":[{"pgid":"1.7","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.127836+0000","last_change":"2026-04-01T09:53:12.127925+0000","last_active":"2026-04-01T09:53:12.127836+0000","last_peered":"2026-04-01T09:53:12.127836+0000","last_clean":"2026-04-01T09:53:12.127836+0000","last_became_active":"2026-04-01T09:53:09.898562+0000","last_became_peered":"2026-04-01T09:53:09.898562+0000","last_unstale":"2026-04-01T09:53:12.127836+0000","last_undegraded":"2026-04-01T09:53:12.127836+0000","last_fullsized":"2026-04-01T09:53:12.127836+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_
start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T14:00:14.397807+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00024032,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3],"acting":[6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.6","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.127304+0000","last_change":"2026-04-01T09:53:12.127786+0000
","last_active":"2026-04-01T09:53:12.127304+0000","last_peered":"2026-04-01T09:53:12.127304+0000","last_clean":"2026-04-01T09:53:12.127304+0000","last_became_active":"2026-04-01T09:53:10.249226+0000","last_became_peered":"2026-04-01T09:53:10.249226+0000","last_unstale":"2026-04-01T09:53:12.127304+0000","last_undegraded":"2026-04-01T09:53:12.127304+0000","last_fullsized":"2026-04-01T09:53:12.127304+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T12:02:35.713359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00064587900000000003,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0],"acting":[6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.5","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.126812+0000","last_change":"2026-04-01T09:53:12.126903+0000","last_active":"2026-04-01T09:53:12.126812+0000","last_peered":"2026-04-01T09:53:12.126812+0000","last_clean":"2026-04-01T09:53:12.126812+0000","last_became_active":"2026-04-01T09:53:09.899296+0000","last_became_peered":"2026-04-01T09:53:09.899296+0000","last_unstale":"2026-04-01T09:53:12.126812+0000","last_undegraded":"2026-04-01T09:53:12.126812+0000","last_fullsized":"2026-04-01T09:53:12.126812+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_
deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T19:37:45.785458+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000178164,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.4","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.127196+0000","last_change":"2026-04-01T09:53:12.127315+0000","last_active":"2026-04-01T09:53:12.127196+0000","last_peered":"2026-04-01T09:53:12.127196+0000","last_clean":"2026-04-01T09:53:12.127196+0000","last_became_active":"2026-04-01T09:53:09
.899169+0000","last_became_peered":"2026-04-01T09:53:09.899169+0000","last_unstale":"2026-04-01T09:53:12.127196+0000","last_undegraded":"2026-04-01T09:53:12.127196+0000","last_fullsized":"2026-04-01T09:53:12.127196+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T12:02:35.713359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00031563099999999997,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5],"acting":[3,5],"avail_no_missing":[],"object_location_counts":
[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"15'32","reported_seq":59,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:11.905405+0000","last_change":"2026-04-01T09:53:09.899442+0000","last_active":"2026-04-01T09:53:11.905405+0000","last_peered":"2026-04-01T09:53:11.905405+0000","last_clean":"2026-04-01T09:53:11.905405+0000","last_became_active":"2026-04-01T09:53:09.899253+0000","last_became_peered":"2026-04-01T09:53:09.899253+0000","last_unstale":"2026-04-01T09:53:11.905405+0000","last_undegraded":"2026-04-01T09:53:11.905405+0000","last_fullsized":"2026-04-01T09:53:11.905405+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T16:36:21.254268+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1],"acting":[7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"1.3","version":"15'1","reported_seq":21,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:11.905410+0000","last_change":"2026-04-01T09:53:11.905410+0000","last_active":"2026-04-01T09:53:11.905410+0000","last_peered":"2026-04-01T09:53:11.905410+0000","last_clean":"2026-04-01T09:53:11.905410+0000","last_became_active":"2026-04-01T09:53:09.896854+0000","last_became_peered":"2026-04-01T09:53:09.896854+0000","last_unstale":"2026-04-01T09:53:11.905410+0000","last_undegraded":"2026-04-01T09:53:11.905410+0000","last_fullsized":"2026-04-01T09:53:11.905410+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:
08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T11:48:49.915142+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00021853899999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6],"acting":[1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.2","version":"17'2","reported_seq":22,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:11.907669+0000","last_change":"2026-04-01T09:53:11.905867+0000","last_active":"2026-04-01T09:53:11.907669+0000","last_peered":"2026-04-01T09:53:11.907669+0000","last_clean":"2026-04-01T09:53:11.907669+0000","last_became_active":"2026-04-01T09:53:10.250223+0000","last_beca
me_peered":"2026-04-01T09:53:10.250223+0000","last_unstale":"2026-04-01T09:53:11.907669+0000","last_undegraded":"2026-04-01T09:53:11.907669+0000","last_fullsized":"2026-04-01T09:53:11.907669+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T21:50:52.206073+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00037328799999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_
primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.1","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:12.126690+0000","last_change":"2026-04-01T09:53:12.126878+0000","last_active":"2026-04-01T09:53:12.126690+0000","last_peered":"2026-04-01T09:53:12.126690+0000","last_clean":"2026-04-01T09:53:12.126690+0000","last_became_active":"2026-04-01T09:53:10.249822+0000","last_became_peered":"2026-04-01T09:53:10.249822+0000","last_unstale":"2026-04-01T09:53:12.126690+0000","last_undegraded":"2026-04-01T09:53:12.126690+0000","last_fullsized":"2026-04-01T09:53:12.126690+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T12:02:35.713359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00027929300000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0],"acting":[5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"0'0","reported_seq":20,"reported_epoch":17,"state":"active+clean","last_fresh":"2026-04-01T09:53:11.905726+0000","last_change":"2026-04-01T09:53:11.905726+0000","last_active":"2026-04-01T09:53:11.905726+0000","last_peered":"2026-04-01T09:53:11.905726+0000","last_clean":"2026-04-01T09:53:11.905726+0000","last_became_active":"2026-04-01T09:53:10.250563+0000","last_became_peered":"2026-04-01T09:53:10.250563+0000","last_unstale":"2026-04-01T09:53:11.905726+0000","last_undegraded":"2026-04-01T09:53:11.905726+0000","last_fullsized":"2026-04-01T09:53:11.905726+0000","mapping_epoch":14,"log_start":"0'0","ondisk_log_start":"0'0","created":14,"last_epoch_clean":15,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_deep_scrub":"0'0","last_
deep_scrub_stamp":"2026-04-01T09:53:08.882862+0000","last_clean_scrub_stamp":"2026-04-01T09:53:08.882862+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T13:25:08.529486+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00025401599999999997,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]}],"pool_stats":[{"poolid":2,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write
_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":925696,"data_stored":918560,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":2,"acting":2,"num_store_stats":2},{"poolid":1,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":
0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":8}],"osd_stats":[{"osd":7,"up_from":13,"seq":55834574852,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27604,"kb_used_data":620,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344236,"statfs":{"total":96636764160,"available":96608497664,"internally_reserved":0,"allocated":634880,"data_stored":512439,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6822,"internal_metadata":27452762},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":13,"seq":55834574851,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6826,"internal_metadata":27452758},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":13,"seq":55834574851,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6827,"inte
rnal_metadata":27452757},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":13,"seq":55834574851,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6827,"internal_metadata":27452757},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":13,"seq":55834574852,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"total":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6825,"internal_metadata":27452759},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":2,"up_from":13,"seq":55834574851,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27020,"kb_used_data":164,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344820,"statfs":{"tota
l":96636764160,"available":96609095680,"internally_reserved":0,"allocated":167936,"data_stored":53140,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6827,"internal_metadata":27452757},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574851,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27600,"kb_used_data":616,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344240,"statfs":{"total":96636764160,"available":96608501760,"internally_reserved":0,"allocated":630784,"data_stored":512420,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6827,"internal_metadata":27452757},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":13,"seq":55834574851,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27024,"kb_used_data":168,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344816,"statfs":{"total":96636764160,"available":96609091584,"internally_reserved":0,"allocated":172032,"data_stored":53159,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6826,"internal_metadata":27452758},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"to
tal":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"ava
ilable":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-04-01T09:53:16.621 INFO:tasks.ceph.ceph_manager.ceph:clean! 2026-04-01T09:53:16.621 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-04-01T09:53:16.621 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy 2026-04-01T09:53:16.621 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph health --format=json 2026-04-01T09:53:16.858 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:53:16.859 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-04-01T09:53:16.870 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy done 2026-04-01T09:53:16.870 INFO:teuthology.run_tasks:Running task openssl_keys... 2026-04-01T09:53:16.873 INFO:teuthology.run_tasks:Running task rgw... 
2026-04-01T09:53:16.877 DEBUG:tasks.rgw:config is {'client.0': None, 'client.1': None, 'client.2': None}
2026-04-01T09:53:16.877 DEBUG:tasks.rgw:client list is dict_keys(['client.0', 'client.1', 'client.2'])
2026-04-01T09:53:16.877 INFO:tasks.rgw:Creating data pools
2026-04-01T09:53:16.877 DEBUG:tasks.rgw:Obtaining remote for client client.0
2026-04-01T09:53:16.877 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph
2026-04-01T09:53:17.653 INFO:teuthology.orchestra.run.vm00.stderr:pool 'default.rgw.buckets.data' created
2026-04-01T09:53:17.670 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph
2026-04-01T09:53:18.657 INFO:teuthology.orchestra.run.vm00.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data'
2026-04-01T09:53:18.686 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph
2026-04-01T09:53:19.658 INFO:teuthology.orchestra.run.vm00.stderr:pool 'default.rgw.buckets.index' created
2026-04-01T09:53:19.681 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph
2026-04-01T09:53:20.672 INFO:teuthology.orchestra.run.vm00.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index'
2026-04-01T09:53:20.700 DEBUG:tasks.rgw:Obtaining remote for client client.1
2026-04-01T09:53:20.701 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph
2026-04-01T09:53:20.910 INFO:teuthology.orchestra.run.vm03.stderr:pool 'default.rgw.buckets.data' already exists
2026-04-01T09:53:20.922 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph
2026-04-01T09:53:21.675 INFO:teuthology.orchestra.run.vm03.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data'
2026-04-01T09:53:21.686 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph
2026-04-01T09:53:21.895 INFO:teuthology.orchestra.run.vm03.stderr:pool 'default.rgw.buckets.index' already exists
2026-04-01T09:53:21.907 DEBUG:teuthology.orchestra.run.vm03:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph
2026-04-01T09:53:22.698 INFO:teuthology.orchestra.run.vm03.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index'
2026-04-01T09:53:22.712 DEBUG:tasks.rgw:Obtaining remote for client client.2
2026-04-01T09:53:22.712 DEBUG:teuthology.orchestra.run.vm07:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph
2026-04-01T09:53:22.936 INFO:teuthology.orchestra.run.vm07.stderr:pool 'default.rgw.buckets.data' already exists
2026-04-01T09:53:22.950 DEBUG:teuthology.orchestra.run.vm07:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph
2026-04-01T09:53:23.699 INFO:teuthology.orchestra.run.vm07.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data'
2026-04-01T09:53:23.716 DEBUG:teuthology.orchestra.run.vm07:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph
2026-04-01T09:53:23.922 INFO:teuthology.orchestra.run.vm07.stderr:pool 'default.rgw.buckets.index' already exists
2026-04-01T09:53:23.935 DEBUG:teuthology.orchestra.run.vm07:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph
2026-04-01T09:53:24.712 INFO:teuthology.orchestra.run.vm07.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index'
2026-04-01T09:53:24.726 DEBUG:tasks.rgw:Pools created
2026-04-01T09:53:24.726 INFO:tasks.util.rgw:rgwadmin: client.0 : ['user', 'list']
2026-04-01T09:53:24.726 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'user', 'list']
2026-04-01T09:53:24.726 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph user list
2026-04-01T09:53:24.760 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:24.760 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:26.767 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.765+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:26.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.766+0000 7f1828754900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:26.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.766+0000 7f1828754900 20 realm
2026-04-01T09:53:26.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.766+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:26.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.766+0000 7f1828754900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:26.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.766+0000 7f1828754900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory
2026-04-01T09:53:26.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.766+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:26.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.767+0000 7f1828754900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:26.768 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.767+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:26.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.767+0000 7f1828754900 20 rados_obj.operate() r=0 bl.length=46
2026-04-01T09:53:26.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.767+0000 7f1828754900 20 rados->read ofs=0
2026-04-01T09:53:26.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.768+0000 7f1828754900 20 rados_obj.operate() r=0 bl.length=1060
2026-04-01T09:53:26.769 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.768+0000 7f1828754900 20 searching for the correct realm
2026-04-01T09:53:26.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 RGWRados::pool_iterate: got zonegroup_info.9b133e09-d229-457d-a14c-135340d8b7fa
2026-04-01T09:53:26.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 RGWRados::pool_iterate: got default.zonegroup.
2026-04-01T09:53:26.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 RGWRados::pool_iterate: got zone_info.0ece0ff1-62c2-4422-8d5e-5d6544aecfef
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 RGWRados::pool_iterate: got default.zone.
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 RGWRados::pool_iterate: got zone_names.default
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 RGWRados::pool_iterate: got zonegroups_names.default
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 rados_obj.operate() r=0 bl.length=46
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 rados_obj.operate() r=0 bl.length=436
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 zone default found
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 4 Realm: ()
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 4 ZoneGroup: default (9b133e09-d229-457d-a14c-135340d8b7fa)
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 4 Zone: default (0ece0ff1-62c2-4422-8d5e-5d6544aecfef)
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 10 cannot find current period zonegroup using local zonegroup configuration
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 zonegroup default
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.778+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.779+0000 7f1828754900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:26.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:26.779+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:28.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:28.756+0000 7f1828754900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:28.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:28.756+0000 7f1828754900 20 rados->read ofs=0 len=0
2026-04-01T09:53:28.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:28.756+0000 7f1828754900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:28.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:28.756+0000 7f1828754900 20 started sync module instance, tier type =
2026-04-01T09:53:28.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:28.756+0000 7f1828754900 20 started zone id=0ece0ff1-62c2-4422-8d5e-5d6544aecfef (name=default) with tier type =
2026-04-01T09:53:30.766 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.764+0000 7f1828754900 20 add_watcher() i=0
2026-04-01T09:53:30.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.769+0000 7f1828754900 20 add_watcher() i=2
2026-04-01T09:53:30.770 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.769+0000 7f1828754900 20 add_watcher() i=3
2026-04-01T09:53:30.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.770+0000 7f1828754900 20 add_watcher() i=4
2026-04-01T09:53:30.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.770+0000 7f1828754900 20 add_watcher() i=7
2026-04-01T09:53:30.772 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.771+0000 7f1828754900 20 add_watcher() i=1
2026-04-01T09:53:30.774 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.773+0000 7f1828754900 20 add_watcher() i=6
2026-04-01T09:53:30.775 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.773+0000 7f1828754900 20 add_watcher() i=5
2026-04-01T09:53:30.775 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.773+0000 7f1828754900 2 all 8 watchers are set, enabling cache
2026-04-01T09:53:30.776 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.774+0000 7f181dffb640 5 boost::asio::awaitable, obj_version> > logback_generations::read(const DoutPrefixProvider*):446: oid=data_loggenerations_metadata not found
2026-04-01T09:53:30.776 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.774+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.0
2026-04-01T09:53:30.776 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.774+0000 7f181dffb640 20 do_open: entering
2026-04-01T09:53:30.776 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.775+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.0 does not exist
2026-04-01T09:53:30.776 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.775+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.1
2026-04-01T09:53:30.776 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.775+0000 7f181d7fa640 20 do_open: entering
2026-04-01T09:53:30.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.776+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.1 does not exist
2026-04-01T09:53:30.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.776+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.2
2026-04-01T09:53:30.777 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.776+0000 7f181cff9640 20 do_open: entering
2026-04-01T09:53:30.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.776+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.2 does not exist
2026-04-01T09:53:30.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.776+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.3
2026-04-01T09:53:30.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.776+0000 7f18254d2640 20 do_open: entering
2026-04-01T09:53:30.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.777+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.3 does not exist
2026-04-01T09:53:30.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.777+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.4
2026-04-01T09:53:30.778 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.777+0000 7f181f7fe640 20 do_open: entering
2026-04-01T09:53:30.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.777+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.4 does not exist
2026-04-01T09:53:30.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.777+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.5
2026-04-01T09:53:30.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.777+0000 7f181effd640 20 do_open: entering
2026-04-01T09:53:30.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.778+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.5 does not exist
2026-04-01T09:53:30.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.778+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.6
2026-04-01T09:53:30.779 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.778+0000 7f181e7fc640 20 do_open: entering
2026-04-01T09:53:30.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.778+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.6 does not exist
2026-04-01T09:53:30.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.778+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.7
2026-04-01T09:53:30.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.778+0000 7f182675b640 20 do_open: entering
2026-04-01T09:53:30.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.7 does not exist
2026-04-01T09:53:30.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.8
2026-04-01T09:53:30.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181dffb640 20 do_open: entering
2026-04-01T09:53:30.780 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.8 does not exist
2026-04-01T09:53:30.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.9
2026-04-01T09:53:30.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181d7fa640 20 do_open: entering
2026-04-01T09:53:30.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.9 does not exist
2026-04-01T09:53:30.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.10
2026-04-01T09:53:30.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.779+0000 7f181cff9640 20 do_open: entering
2026-04-01T09:53:30.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.10 does not exist
2026-04-01T09:53:30.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.11
2026-04-01T09:53:30.781 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f18254d2640 20 do_open: entering
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.11 does not exist
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.12
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f181f7fe640 20 do_open: entering
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.12 does not exist
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.13
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.780+0000 7f181effd640 20 do_open: entering
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.13 does not exist
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.14
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181e7fc640 20 do_open: entering
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.14 does not exist
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.15
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f182675b640 20 do_open: entering
2026-04-01T09:53:30.782 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.15 does not exist
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.16
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181dffb640 20 do_open: entering
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.16 does not exist
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.17
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.781+0000 7f181d7fa640 20 do_open: entering
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.17 does not exist
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.18
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181cff9640 20 do_open: entering
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.18 does not exist
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.19
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f18254d2640 20 do_open: entering
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.19 does not exist
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.20
2026-04-01T09:53:30.783 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181f7fe640 20 do_open: entering
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.20 does not exist
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.21
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181effd640 20 do_open: entering
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.21 does not exist
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.22
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.782+0000 7f181e7fc640 20 do_open: entering
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.22 does not exist
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.23
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f182675b640 20 do_open: entering
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.23 does not exist
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.24
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181dffb640 20 do_open: entering
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.24 does not exist
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.25
2026-04-01T09:53:30.784 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181d7fa640 20 do_open: entering
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.25 does not exist
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.26
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f181cff9640 20 do_open: entering
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.26 does not exist
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.27
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.783+0000 7f18254d2640 20 do_open: entering
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.27 does not exist
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.28
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181f7fe640 20 do_open: entering
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.28 does not exist
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.29
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181effd640 20 do_open: entering
2026-04-01T09:53:30.785 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.29 does not exist
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.30
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f181e7fc640 20 do_open: entering
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.30 does not exist
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.31
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.784+0000 7f182675b640 20 do_open: entering
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.31 does not exist
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.32
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181dffb640 20 do_open: entering
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.32 does not exist
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.33
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181d7fa640 20 do_open: entering
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.33 does not exist
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.34
2026-04-01T09:53:30.786 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181cff9640 20 do_open: entering
2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.34 does not exist
2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.35
2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f18254d2640 20 do_open: entering
2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.35 does not exist
2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.36
2026-04-01T09:53:30.787
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.785+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.36 does not exist 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.37 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.37 does not exist 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.38 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.38 does not exist 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f182675b640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.39 2026-04-01T09:53:30.787 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.39 does not exist 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.40 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.40 does not exist 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.41 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.786+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.41 does not exist 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.42 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.42 does not exist 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.43 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.788 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.43 does not exist 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.44 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181effd640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.44 does not exist 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.45 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.45 does not exist 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.46 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.787+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.46 does not exist 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.47 2026-04-01T09:53:30.789 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.47 does not exist 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.48 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.48 does not exist 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.49 2026-04-01T09:53:30.789 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.49 does not exist 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181cff9640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.50 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.50 does not exist 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.51 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.788+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.51 does not exist 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.52 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.52 does not exist 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.53 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.53 does not exist 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.54 2026-04-01T09:53:30.790 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.54 does not exist 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.55 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181dffb640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.55 does not exist 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.56 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.789+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.56 does not exist 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.57 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.57 does not exist 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.58 2026-04-01T09:53:30.791 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.58 does not exist 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.59 2026-04-01T09:53:30.791 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.59 does not exist 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.60 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.60 does not exist 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181effd640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.61 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.61 does not exist 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.62 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.790+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.62 does not exist 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.63 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.63 does not exist 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.64 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.64 does not exist 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.65 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.65 does not exist 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.66 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f18254d2640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.66 does not exist 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.67 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.67 does not exist 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.68 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.791+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.68 does not exist 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.69 2026-04-01T09:53:30.793 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.69 does not exist 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.70 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.70 does not exist 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.71 2026-04-01T09:53:30.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.71 does not exist 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181dffb640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.72 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.72 does not exist 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.73 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.792+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.73 does not exist 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.74 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.74 does not exist 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.75 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.75 does not exist 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.76 2026-04-01T09:53:30.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.76 does not exist 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.77 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181e7fc640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.77 does not exist 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.78 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.793+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.78 does not exist 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.79 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.79 does not exist 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.80 2026-04-01T09:53:30.795 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.80 does not exist 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.81 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.81 does not exist 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.82 2026-04-01T09:53:30.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.82 does not exist 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f18254d2640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.83 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.83 does not exist 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.84 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.794+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.84 does not exist 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.85 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.85 does not exist 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.86 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.86 does not exist 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.87 2026-04-01T09:53:30.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.87 does not exist 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.88 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181d7fa640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.88 does not exist 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.89 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.795+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.89 does not exist 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.90 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.90 does not exist 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.91 2026-04-01T09:53:30.797 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.91 does not exist 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.92 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.92 does not exist 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.93 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.93 does not exist 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181e7fc640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.94 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.796+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.94 does not exist 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.95 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.95 does not exist 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.96 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.96 does not exist 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.97 2026-04-01T09:53:30.798 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.97 does not exist 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.98 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.98 does not exist 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.99 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.797+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181f7fe640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.99 does not exist 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.100 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.100 does not exist 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.101 2026-04-01T09:53:30.799 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.101 does not exist 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.102 2026-04-01T09:53:30.800 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.102 does not exist 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.103 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.798+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.103 does not exist 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.104 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.104 does not exist 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181d7fa640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.105 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.105 does not exist 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.106 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.106 does not exist 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.107 2026-04-01T09:53:30.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.107 does not exist 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.108 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.108 does not exist 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.109 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.799+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.109 does not exist 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.110 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f182675b640 
20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.110 does not exist 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.111 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.111 does not exist 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.112 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.112 does not exist 2026-04-01T09:53:30.801 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.113 2026-04-01T09:53:30.801 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.113 does not exist 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.114 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.114 does not exist 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.115 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.800+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.115 does not exist 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181f7fe640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.116 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.116 does not exist 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.117 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.117 does not exist 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.118 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.118 does not exist 2026-04-01T09:53:30.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.119 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.119 does not exist 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.120 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.801+0000 7f181dffb640 20 do_open: entering 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.120 does not exist 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181d7fa640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.121 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181cff9640 
20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.121 does not exist 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181cff9640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.122 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181cff9640 20 do_open: entering 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.122 does not exist 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f18254d2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.123 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f18254d2640 20 do_open: entering 2026-04-01T09:53:30.803 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.123 does not exist 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181f7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.124 2026-04-01T09:53:30.804 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181f7fe640 20 do_open: entering 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.124 does not exist 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181effd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.125 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.802+0000 7f181effd640 20 do_open: entering 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.803+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.125 does not exist 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.803+0000 7f181e7fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.126 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.803+0000 7f181e7fc640 20 do_open: entering 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.803+0000 7f182675b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.126 does not exist 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.803+0000 7f182675b640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.127 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.803+0000 7f182675b640 20 do_open: entering 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.803+0000 7f181dffb640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.127 does not exist 2026-04-01T09:53:30.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.803+0000 7f181dffb640 20 do_create: entering 2026-04-01T09:53:30.806 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.804+0000 7f181d7fa640 20 do_open: entering 2026-04-01T09:53:30.809 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.808+0000 7f1828754900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-04-01T09:53:30.809 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:30.808+0000 7f1828754900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-04-01T09:53:33.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.769+0000 7f1828754900 10 rgw_init_ioctx warning: failed to set recovery_priority on default.rgw.meta 2026-04-01T09:53:33.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.769+0000 7f1828754900 5 note: GC not initialized 2026-04-01T09:53:33.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.769+0000 7f17ccff1640 20 reqs_thread_entry: start 2026-04-01T09:53:33.844 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.843+0000 7f1828754900 20 init_complete bucket index max shards: 11 2026-04-01T09:53:33.844 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.843+0000 7f1828754900 20 Filter name: none 2026-04-01T09:53:33.844 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.843+0000 7f17c67fc640 20 reqs_thread_entry: start 2026-04-01T09:53:33.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.854+0000 7f1828754900 20 remove_watcher() i=4 2026-04-01T09:53:33.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.854+0000 7f1828754900 2 removed watcher, disabling cache 2026-04-01T09:53:33.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.854+0000 7f1828754900 20 remove_watcher() i=3 2026-04-01T09:53:33.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.855+0000 7f1828754900 20 remove_watcher() i=7 2026-04-01T09:53:33.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.855+0000 7f1828754900 20 remove_watcher() i=2 2026-04-01T09:53:33.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.855+0000 7f1828754900 20 remove_watcher() i=0 2026-04-01T09:53:33.856 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.855+0000 7f1828754900 20 remove_watcher() i=1 2026-04-01T09:53:33.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.855+0000 7f1828754900 20 remove_watcher() i=5 2026-04-01T09:53:33.857 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.856+0000 7f1828754900 20 remove_watcher() i=6 2026-04-01T09:53:33.865 INFO:teuthology.orchestra.run.vm00.stdout:[] 2026-04-01T09:53:33.865 DEBUG:tasks.util.rgw: json result: [] 2026-04-01T09:53:33.865 INFO:tasks.rgw:Configuring storage class = FROZEN 2026-04-01T09:53:33.865 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-04-01T09:53:33.865 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', 
'--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-04-01T09:53:33.865 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN 2026-04-01T09:53:33.949 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-04-01T09:53:33.949 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-04-01T09:53:33.969 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.967+0000 7f66e776e900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-04-01T09:53:33.969 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.967+0000 7f66e776e900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-04-01T09:53:33.969 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:33.968+0000 7f668bfff640 20 reqs_thread_entry: start 2026-04-01T09:53:33.981 INFO:teuthology.orchestra.run.vm00.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","STANDARD"]}}] 2026-04-01T09:53:33.981 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'STANDARD']}}] 2026-04-01T09:53:33.981 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-04-01T09:53:33.981 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', 
'--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-04-01T09:53:33.981 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen 2026-04-01T09:53:34.022 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-04-01T09:53:34.022 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-04-01T09:53:34.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.034+0000 7f9740da2900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-04-01T09:53:34.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.034+0000 7f9740da2900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-04-01T09:53:34.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.034+0000 7f96ecfe9640 20 reqs_thread_entry: start 2026-04-01T09:53:34.045 
INFO:teuthology.orchestra.run.vm00.stdout:{"id":"0ece0ff1-62c2-4422-8d5e-5d6544aecfef","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-04-01T09:53:34.045 DEBUG:tasks.util.rgw: json result: {'id': '0ece0ff1-62c2-4422-8d5e-5d6544aecfef', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 
'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-04-01T09:53:34.045 INFO:tasks.rgw:Configuring storage class = LUKEWARM 2026-04-01T09:53:34.045 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-04-01T09:53:34.045 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-04-01T09:53:34.046 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM 2026-04-01T09:53:34.128 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-04-01T09:53:34.128 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-04-01T09:53:34.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.139+0000 7fa1c9e12900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-04-01T09:53:34.141 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.139+0000 7fa1c9e12900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-04-01T09:53:34.141 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.140+0000 7fa1737e6640 20 reqs_thread_entry: start 2026-04-01T09:53:34.150 INFO:teuthology.orchestra.run.vm00.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-04-01T09:53:34.150 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-04-01T09:53:34.150 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-04-01T09:53:34.150 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-04-01T09:53:34.150 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm 2026-04-01T09:53:34.234 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-04-01T09:53:34.235 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-04-01T09:53:34.250 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.248+0000 7f5bc73c9900 20 
rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-04-01T09:53:34.250 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.248+0000 7f5bc73c9900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-04-01T09:53:34.250 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:53:34.249+0000 7f5b6bfff640 20 reqs_thread_entry: start 2026-04-01T09:53:34.262 INFO:teuthology.orchestra.run.vm00.stdout:{"id":"0ece0ff1-62c2-4422-8d5e-5d6544aecfef","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-04-01T09:53:34.262 DEBUG:tasks.util.rgw: json result: {'id': '0ece0ff1-62c2-4422-8d5e-5d6544aecfef', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 
'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-04-01T09:53:34.262 INFO:tasks.util.rgw:rgwadmin: client.1 : ['user', 'list'] 2026-04-01T09:53:34.262 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'user', 'list'] 2026-04-01T09:53:34.262 DEBUG:teuthology.orchestra.run.vm03:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph user list 2026-04-01T09:53:34.301 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setuser ceph since I am not root 2026-04-01T09:53:34.302 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setgroup ceph since I am not root 2026-04-01T09:53:34.320 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.320+0000 7f49d7d9f900 20 
rados->read ofs=0 len=0 2026-04-01T09:53:34.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.321+0000 7f49d7d9f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:53:34.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.321+0000 7f49d7d9f900 20 realm 2026-04-01T09:53:34.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.321+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.321+0000 7f49d7d9f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:53:34.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.321+0000 7f49d7d9f900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory 2026-04-01T09:53:34.321 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.321+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.322 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.321+0000 7f49d7d9f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:53:34.322 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.321+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.322 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.322+0000 7f49d7d9f900 20 rados_obj.operate() r=0 bl.length=46 2026-04-01T09:53:34.322 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.322+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.323 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.323+0000 7f49d7d9f900 20 rados_obj.operate() r=0 bl.length=1190 2026-04-01T09:53:34.323 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.323+0000 7f49d7d9f900 20 searching for the correct realm 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.332+0000 7f49d7d9f900 20 RGWRados::pool_iterate: got zonegroup_info.9b133e09-d229-457d-a14c-135340d8b7fa 2026-04-01T09:53:34.333 
INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.332+0000 7f49d7d9f900 20 RGWRados::pool_iterate: got default.zonegroup. 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.332+0000 7f49d7d9f900 20 RGWRados::pool_iterate: got zone_info.0ece0ff1-62c2-4422-8d5e-5d6544aecfef 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.332+0000 7f49d7d9f900 20 RGWRados::pool_iterate: got default.zone. 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.332+0000 7f49d7d9f900 20 RGWRados::pool_iterate: got zone_names.default 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.332+0000 7f49d7d9f900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.332+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 rados_obj.operate() r=0 bl.length=46 2026-04-01T09:53:34.333 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 rados_obj.operate() r=0 bl.length=470 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 zone default found 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 4 Realm: () 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 
7f49d7d9f900 4 ZoneGroup: default (9b133e09-d229-457d-a14c-135340d8b7fa) 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 4 Zone: default (0ece0ff1-62c2-4422-8d5e-5d6544aecfef) 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 10 cannot find current period zonegroup using local zonegroup configuration 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 zonegroup default 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.333+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.334+0000 7f49d7d9f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.334+0000 7f49d7d9f900 20 rados->read ofs=0 len=0 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.334+0000 7f49d7d9f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.334+0000 7f49d7d9f900 20 started sync module instance, tier type = 2026-04-01T09:53:34.334 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.334+0000 7f49d7d9f900 20 started zone id=0ece0ff1-62c2-4422-8d5e-5d6544aecfef (name=default) with tier type = 2026-04-01T09:53:34.337 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.337+0000 7f49d7d9f900 20 add_watcher() i=2 2026-04-01T09:53:34.338 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.338+0000 7f49d7d9f900 20 
add_watcher() i=0 2026-04-01T09:53:34.338 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.338+0000 7f49d7d9f900 20 add_watcher() i=3 2026-04-01T09:53:34.339 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.338+0000 7f49d7d9f900 20 add_watcher() i=7 2026-04-01T09:53:34.339 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.339+0000 7f49d7d9f900 20 add_watcher() i=4 2026-04-01T09:53:34.339 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.339+0000 7f49d7d9f900 20 add_watcher() i=1 2026-04-01T09:53:34.339 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.339+0000 7f49d7d9f900 20 add_watcher() i=5 2026-04-01T09:53:34.339 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.339+0000 7f49d7d9f900 20 add_watcher() i=6 2026-04-01T09:53:34.340 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.339+0000 7f49d7d9f900 2 all 8 watchers are set, enabling cache 2026-04-01T09:53:34.342 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.342+0000 7f49d7d9f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-04-01T09:53:34.342 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.342+0000 7f49d7d9f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-04-01T09:53:34.342 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.342+0000 7f49d7d9f900 5 note: GC not initialized 2026-04-01T09:53:34.342 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.342+0000 7f4980ff1640 20 reqs_thread_entry: start 2026-04-01T09:53:34.385 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.384+0000 7f49d7d9f900 20 init_complete bucket index max shards: 11 2026-04-01T09:53:34.385 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.384+0000 7f49d7d9f900 20 Filter name: none 2026-04-01T09:53:34.385 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.384+0000 7f497a7fc640 20 reqs_thread_entry: start 
2026-04-01T09:53:34.395 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.395+0000 7f49d7d9f900 20 remove_watcher() i=0 2026-04-01T09:53:34.395 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.395+0000 7f49d7d9f900 2 removed watcher, disabling cache 2026-04-01T09:53:34.395 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.395+0000 7f49d7d9f900 20 remove_watcher() i=1 2026-04-01T09:53:34.395 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.395+0000 7f49d7d9f900 20 remove_watcher() i=2 2026-04-01T09:53:34.396 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.395+0000 7f49d7d9f900 20 remove_watcher() i=3 2026-04-01T09:53:34.396 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.395+0000 7f49d7d9f900 20 remove_watcher() i=4 2026-04-01T09:53:34.396 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.396+0000 7f49d7d9f900 20 remove_watcher() i=7 2026-04-01T09:53:34.397 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.396+0000 7f49d7d9f900 20 remove_watcher() i=6 2026-04-01T09:53:34.397 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.396+0000 7f49d7d9f900 20 remove_watcher() i=5 2026-04-01T09:53:34.404 INFO:teuthology.orchestra.run.vm03.stdout:[] 2026-04-01T09:53:34.405 DEBUG:tasks.util.rgw: json result: [] 2026-04-01T09:53:34.405 INFO:tasks.rgw:Configuring storage class = FROZEN 2026-04-01T09:53:34.405 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-04-01T09:53:34.405 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-04-01T09:53:34.405 
DEBUG:teuthology.orchestra.run.vm03:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN 2026-04-01T09:53:34.484 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setuser ceph since I am not root 2026-04-01T09:53:34.484 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setgroup ceph since I am not root 2026-04-01T09:53:34.500 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.500+0000 7f8f15b22900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-04-01T09:53:34.500 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.500+0000 7f8f15b22900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-04-01T09:53:34.500 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.500+0000 7f8ebffef640 20 reqs_thread_entry: start 2026-04-01T09:53:34.510 INFO:teuthology.orchestra.run.vm03.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-04-01T09:53:34.511 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-04-01T09:53:34.511 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-04-01T09:53:34.511 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 
'default.rgw.buckets.data.frozen']
2026-04-01T09:53:34.511 DEBUG:teuthology.orchestra.run.vm03:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen
2026-04-01T09:53:34.592 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:34.592 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:34.605 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.604+0000 7fceae766900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-04-01T09:53:34.605 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.604+0000 7fceae766900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-04-01T09:53:34.605 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.605+0000 7fce537fe640 20 reqs_thread_entry: start
2026-04-01T09:53:34.617 INFO:teuthology.orchestra.run.vm03.stdout:{"id":"0ece0ff1-62c2-4422-8d5e-5d6544aecfef","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"}
2026-04-01T09:53:34.617 DEBUG:tasks.util.rgw: json result: {'id': '0ece0ff1-62c2-4422-8d5e-5d6544aecfef', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'}
2026-04-01T09:53:34.617 INFO:tasks.rgw:Configuring storage class = LUKEWARM
2026-04-01T09:53:34.618 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM']
2026-04-01T09:53:34.618 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM']
2026-04-01T09:53:34.618 DEBUG:teuthology.orchestra.run.vm03:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM
2026-04-01T09:53:34.706 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:34.706 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:34.721 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.720+0000 7fb5ca122900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-04-01T09:53:34.721 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.720+0000 7fb5ca122900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-04-01T09:53:34.721 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.720+0000 7fb5737ee640 20 reqs_thread_entry: start
2026-04-01T09:53:34.733 INFO:teuthology.orchestra.run.vm03.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}]
2026-04-01T09:53:34.733 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}]
2026-04-01T09:53:34.733 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm']
2026-04-01T09:53:34.733 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm']
2026-04-01T09:53:34.733 DEBUG:teuthology.orchestra.run.vm03:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm
2026-04-01T09:53:34.776 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:34.776 INFO:teuthology.orchestra.run.vm03.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:34.793 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.792+0000 7fcd0875a900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-04-01T09:53:34.793 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.792+0000 7fcd0875a900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-04-01T09:53:34.793 INFO:teuthology.orchestra.run.vm03.stderr:2026-04-01T09:53:34.792+0000 7fccb17f2640 20 reqs_thread_entry: start
2026-04-01T09:53:34.804 INFO:teuthology.orchestra.run.vm03.stdout:{"id":"0ece0ff1-62c2-4422-8d5e-5d6544aecfef","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"}
2026-04-01T09:53:34.805 DEBUG:tasks.util.rgw: json result: {'id': '0ece0ff1-62c2-4422-8d5e-5d6544aecfef', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'}
2026-04-01T09:53:34.805 INFO:tasks.util.rgw:rgwadmin: client.2 : ['user', 'list']
2026-04-01T09:53:34.805 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'user', 'list']
2026-04-01T09:53:34.805 DEBUG:teuthology.orchestra.run.vm07:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph user list
2026-04-01T09:53:34.848 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:34.848 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:34.870 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.869+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.870+0000 7f86b698b900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:34.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.870+0000 7f86b698b900 20 realm
2026-04-01T09:53:34.871 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.870+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.871+0000 7f86b698b900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.871+0000 7f86b698b900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory
2026-04-01T09:53:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.871+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.871+0000 7f86b698b900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.871+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.872+0000 7f86b698b900 20 rados_obj.operate() r=0 bl.length=46
2026-04-01T09:53:34.872 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.872+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.873 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.873+0000 7f86b698b900 20 rados_obj.operate() r=0 bl.length=1190
2026-04-01T09:53:34.873 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.873+0000 7f86b698b900 20 searching for the correct realm
2026-04-01T09:53:34.884 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 RGWRados::pool_iterate: got zonegroup_info.9b133e09-d229-457d-a14c-135340d8b7fa
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 RGWRados::pool_iterate: got default.zonegroup.
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 RGWRados::pool_iterate: got zone_info.0ece0ff1-62c2-4422-8d5e-5d6544aecfef
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 RGWRados::pool_iterate: got default.zone.
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 RGWRados::pool_iterate: got zone_names.default
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 RGWRados::pool_iterate: got zonegroups_names.default
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 rados_obj.operate() r=0 bl.length=46
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.884+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 rados_obj.operate() r=0 bl.length=470
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 zone default found
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 4 Realm: ()
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 4 ZoneGroup: default (9b133e09-d229-457d-a14c-135340d8b7fa)
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 4 Zone: default (0ece0ff1-62c2-4422-8d5e-5d6544aecfef)
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 10 cannot find current period zonegroup using local zonegroup configuration
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 zonegroup default
2026-04-01T09:53:34.885 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:34.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:34.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 rados->read ofs=0 len=0
2026-04-01T09:53:34.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.885+0000 7f86b698b900 20 rados_obj.operate() r=-2 bl.length=0
2026-04-01T09:53:34.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.886+0000 7f86b698b900 20 started sync module instance, tier type =
2026-04-01T09:53:34.886 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.886+0000 7f86b698b900 20 started zone id=0ece0ff1-62c2-4422-8d5e-5d6544aecfef (name=default) with tier type =
2026-04-01T09:53:34.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.889+0000 7f86b698b900 20 add_watcher() i=6
2026-04-01T09:53:34.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.889+0000 7f86b698b900 20 add_watcher() i=0
2026-04-01T09:53:34.890 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.890+0000 7f86b698b900 20 add_watcher() i=2
2026-04-01T09:53:34.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.890+0000 7f86b698b900 20 add_watcher() i=5
2026-04-01T09:53:34.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.890+0000 7f86b698b900 20 add_watcher() i=3
2026-04-01T09:53:34.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.890+0000 7f86b698b900 20 add_watcher() i=7
2026-04-01T09:53:34.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.890+0000 7f86b698b900 20 add_watcher() i=1
2026-04-01T09:53:34.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.891+0000 7f86b698b900 20 add_watcher() i=4
2026-04-01T09:53:34.891 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.891+0000 7f86b698b900 2 all 8 watchers are set, enabling cache
2026-04-01T09:53:34.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.893+0000 7f86b698b900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-04-01T09:53:34.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.893+0000 7f86b698b900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-04-01T09:53:34.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.893+0000 7f86b698b900 5 note: GC not initialized
2026-04-01T09:53:34.894 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.894+0000 7f865ffef640 20 reqs_thread_entry: start
2026-04-01T09:53:34.941 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.940+0000 7f86b698b900 20 init_complete bucket index max shards: 11
2026-04-01T09:53:34.941 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.940+0000 7f86b698b900 20 Filter name: none
2026-04-01T09:53:34.941 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.940+0000 7f865dfeb640 20 reqs_thread_entry: start
2026-04-01T09:53:34.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.952+0000 7f86b698b900 20 remove_watcher() i=0
2026-04-01T09:53:34.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.952+0000 7f86b698b900 2 removed watcher, disabling cache
2026-04-01T09:53:34.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.952+0000 7f86b698b900 20 remove_watcher() i=2
2026-04-01T09:53:34.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.952+0000 7f86b698b900 20 remove_watcher() i=3
2026-04-01T09:53:34.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.952+0000 7f86b698b900 20 remove_watcher() i=7
2026-04-01T09:53:34.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.953+0000 7f86b698b900 20 remove_watcher() i=4
2026-04-01T09:53:34.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.953+0000 7f86b698b900 20 remove_watcher() i=5
2026-04-01T09:53:34.953 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.953+0000 7f86b698b900 20 remove_watcher() i=6
2026-04-01T09:53:34.954 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:34.953+0000 7f86b698b900 20 remove_watcher() i=1
2026-04-01T09:53:34.960 INFO:teuthology.orchestra.run.vm07.stdout:[]
2026-04-01T09:53:34.960 DEBUG:tasks.util.rgw: json result: []
2026-04-01T09:53:34.960 INFO:tasks.rgw:Configuring storage class = FROZEN
2026-04-01T09:53:34.960 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN']
2026-04-01T09:53:34.960 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN']
2026-04-01T09:53:34.960 DEBUG:teuthology.orchestra.run.vm07:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN
2026-04-01T09:53:35.000 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:35.000 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:35.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.015+0000 7f6696d75900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-04-01T09:53:35.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.015+0000 7f6696d75900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-04-01T09:53:35.016 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.015+0000 7f6641ff3640 20 reqs_thread_entry: start
2026-04-01T09:53:35.027 INFO:teuthology.orchestra.run.vm07.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}]
2026-04-01T09:53:35.027 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}]
2026-04-01T09:53:35.027 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen']
2026-04-01T09:53:35.027 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen']
2026-04-01T09:53:35.027 DEBUG:teuthology.orchestra.run.vm07:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen
2026-04-01T09:53:35.111 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:35.111 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:35.126 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.125+0000 7f942fd5f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-04-01T09:53:35.126 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.125+0000 7f942fd5f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-04-01T09:53:35.126 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.126+0000 7f93d97f2640 20 reqs_thread_entry: start
2026-04-01T09:53:35.138 INFO:teuthology.orchestra.run.vm07.stdout:{"id":"0ece0ff1-62c2-4422-8d5e-5d6544aecfef","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"}
2026-04-01T09:53:35.138 DEBUG:tasks.util.rgw: json result: {'id': '0ece0ff1-62c2-4422-8d5e-5d6544aecfef', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'}
2026-04-01T09:53:35.138 INFO:tasks.rgw:Configuring storage class = LUKEWARM
2026-04-01T09:53:35.138 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM']
2026-04-01T09:53:35.139 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM']
2026-04-01T09:53:35.139 DEBUG:teuthology.orchestra.run.vm07:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM
2026-04-01T09:53:35.179 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:35.179 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:35.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.192+0000 7fae30161900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-04-01T09:53:35.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.192+0000 7fae30161900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-04-01T09:53:35.193 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.193+0000 7fadd97ea640 20 reqs_thread_entry: start
2026-04-01T09:53:35.203 INFO:teuthology.orchestra.run.vm07.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}]
2026-04-01T09:53:35.203 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}]
2026-04-01T09:53:35.203 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm']
2026-04-01T09:53:35.203 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm']
2026-04-01T09:53:35.203 DEBUG:teuthology.orchestra.run.vm07:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm
2026-04-01T09:53:35.288 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setuser ceph since I am not root
2026-04-01T09:53:35.288 INFO:teuthology.orchestra.run.vm07.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T09:53:35.301 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.301+0000 7f990e76e900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-04-01T09:53:35.301 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.301+0000 7f990e76e900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-04-01T09:53:35.301 INFO:teuthology.orchestra.run.vm07.stderr:2026-04-01T09:53:35.301+0000 7f98b37fe640 20 reqs_thread_entry: start
2026-04-01T09:53:35.315 INFO:teuthology.orchestra.run.vm07.stdout:{"id":"0ece0ff1-62c2-4422-8d5e-5d6544aecfef","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"}
2026-04-01T09:53:35.315 DEBUG:tasks.util.rgw: json result: {'id': '0ece0ff1-62c2-4422-8d5e-5d6544aecfef', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'}
2026-04-01T09:53:35.315 INFO:tasks.rgw:Starting rgw...
2026-04-01T09:53:35.315 INFO:tasks.rgw:rgw client.0 config is {}
2026-04-01T09:53:35.315 INFO:tasks.rgw:Using beast as radosgw frontend
2026-04-01T09:53:35.315 DEBUG:teuthology.orchestra.run.vm00:> sudo echo -n http://vm00.local:80 | sudo tee /home/ubuntu/cephtest/url_file
2026-04-01T09:53:35.342 INFO:teuthology.orchestra.run.vm00.stdout:http://vm00.local:80
2026-04-01T09:53:35.342 DEBUG:teuthology.orchestra.run.vm00:> sudo chown ceph /home/ubuntu/cephtest/url_file
2026-04-01T09:53:35.408 INFO:tasks.rgw.client.0:Restarting daemon
2026-04-01T09:53:35.408 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.0.stdout 2>&1
2026-04-01T09:53:35.450 INFO:tasks.rgw.client.0:Started
2026-04-01T09:53:35.450 INFO:tasks.rgw:rgw client.1 config is {}
2026-04-01T09:53:35.450 INFO:tasks.rgw:Using beast as radosgw frontend
2026-04-01T09:53:35.450 DEBUG:teuthology.orchestra.run.vm03:> sudo echo -n http://vm03.local:80 | sudo tee /home/ubuntu/cephtest/url_file
2026-04-01T09:53:35.478 INFO:teuthology.orchestra.run.vm03.stdout:http://vm03.local:80
2026-04-01T09:53:35.478 DEBUG:teuthology.orchestra.run.vm03:> sudo chown ceph /home/ubuntu/cephtest/url_file
2026-04-01T09:53:35.547 INFO:tasks.rgw.client.1:Restarting daemon
2026-04-01T09:53:35.547 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.1 --cluster ceph -k /etc/ceph/ceph.client.1.keyring --log-file /var/log/ceph/rgw.ceph.client.1.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.1.stdout 2>&1
2026-04-01T09:53:35.589 INFO:tasks.rgw.client.1:Started
2026-04-01T09:53:35.589 INFO:tasks.rgw:rgw client.2 config is {}
2026-04-01T09:53:35.589 INFO:tasks.rgw:Using beast as radosgw frontend
2026-04-01T09:53:35.589 DEBUG:teuthology.orchestra.run.vm07:> sudo echo -n http://vm07.local:80 | sudo tee /home/ubuntu/cephtest/url_file
2026-04-01T09:53:35.624 INFO:teuthology.orchestra.run.vm07.stdout:http://vm07.local:80
2026-04-01T09:53:35.624 DEBUG:teuthology.orchestra.run.vm07:> sudo chown ceph /home/ubuntu/cephtest/url_file
2026-04-01T09:53:35.696 INFO:tasks.rgw.client.2:Restarting daemon
2026-04-01T09:53:35.696 DEBUG:teuthology.orchestra.run.vm07:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.2 --cluster ceph -k /etc/ceph/ceph.client.2.keyring --log-file /var/log/ceph/rgw.ceph.client.2.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.2.stdout 2>&1
2026-04-01T09:53:35.738 INFO:tasks.rgw.client.2:Started
2026-04-01T09:53:35.738 INFO:tasks.rgw:Polling client.0 until it starts accepting connections on http://vm00.local:80/
2026-04-01T09:53:35.738 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/
2026-04-01T09:53:35.785 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-01T09:53:35.785 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed
2026-04-01T09:53:35.788 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 62333 0 --:--:-- --:--:-- --:--:-- 62333
2026-04-01T09:53:35.790 INFO:teuthology.orchestra.run.vm00.stdout:anonymous
2026-04-01T09:53:35.790 INFO:tasks.rgw:Polling client.1 until it starts accepting connections on http://vm03.local:80/
2026-04-01T09:53:35.790 DEBUG:teuthology.orchestra.run.vm03:> curl http://vm03.local:80/
2026-04-01T09:53:35.827 INFO:teuthology.orchestra.run.vm03.stderr: % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-01T09:53:35.827 INFO:teuthology.orchestra.run.vm03.stderr: Dload Upload Total Spent Left Speed
2026-04-01T09:53:35.828 INFO:teuthology.orchestra.run.vm03.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
2026-04-01T09:53:35.828 INFO:teuthology.orchestra.run.vm03.stderr:curl: (7) Failed to connect to vm03.local port 80: Connection refused
2026-04-01T09:53:35.829 DEBUG:teuthology.orchestra.run:got remote process result: 7
2026-04-01T09:53:36.830 DEBUG:teuthology.orchestra.run.vm03:> curl http://vm03.local:80/
2026-04-01T09:53:36.850 INFO:teuthology.orchestra.run.vm03.stderr: % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-01T09:53:36.850 INFO:teuthology.orchestra.run.vm03.stderr: Dload Upload Total Spent Left Speed
2026-04-01T09:53:36.852 INFO:teuthology.orchestra.run.vm03.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 93500 0 --:--:-- --:--:-- --:--:-- 93500
2026-04-01T09:53:36.852 INFO:teuthology.orchestra.run.vm03.stdout:anonymous
2026-04-01T09:53:36.852 INFO:tasks.rgw:Polling client.2 until it starts accepting connections on http://vm07.local:80/
2026-04-01T09:53:36.852 DEBUG:teuthology.orchestra.run.vm07:> curl http://vm07.local:80/
2026-04-01T09:53:36.875 INFO:teuthology.orchestra.run.vm07.stderr: % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-01T09:53:36.875 INFO:teuthology.orchestra.run.vm07.stderr: Dload Upload Total Spent Left Speed
2026-04-01T09:53:36.877 INFO:teuthology.orchestra.run.vm07.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 182k 0 --:--:-- --:--:-- --:--:-- 182k
2026-04-01T09:53:36.877 INFO:teuthology.orchestra.run.vm07.stdout:anonymous
2026-04-01T09:53:36.877 INFO:teuthology.run_tasks:Running task
tox... 2026-04-01T09:53:36.880 INFO:tasks.tox:Deploying tox from pip... 2026-04-01T09:53:36.880 DEBUG:teuthology.orchestra.run.vm00:> python3 -m venv /home/ubuntu/cephtest/tox-venv 2026-04-01T09:53:38.218 DEBUG:teuthology.orchestra.run.vm00:> source /home/ubuntu/cephtest/tox-venv/bin/activate && pip install tox 2026-04-01T09:53:38.522 INFO:teuthology.orchestra.run.vm00.stdout:Collecting tox 2026-04-01T09:53:38.550 INFO:teuthology.orchestra.run.vm00.stdout: Downloading tox-4.30.3-py3-none-any.whl (175 kB) 2026-04-01T09:53:38.604 INFO:teuthology.orchestra.run.vm00.stdout:Collecting platformdirs>=4.3.8 2026-04-01T09:53:38.614 INFO:teuthology.orchestra.run.vm00.stdout: Downloading platformdirs-4.4.0-py3-none-any.whl (18 kB) 2026-04-01T09:53:38.637 INFO:teuthology.orchestra.run.vm00.stdout:Collecting pluggy>=1.6 2026-04-01T09:53:38.645 INFO:teuthology.orchestra.run.vm00.stdout: Downloading pluggy-1.6.0-py3-none-any.whl (20 kB) 2026-04-01T09:53:38.697 INFO:teuthology.orchestra.run.vm00.stdout:Collecting chardet>=5.2 2026-04-01T09:53:38.705 INFO:teuthology.orchestra.run.vm00.stdout: Downloading chardet-5.2.0-py3-none-any.whl (199 kB) 2026-04-01T09:53:38.732 INFO:teuthology.orchestra.run.vm00.stdout:Collecting colorama>=0.4.6 2026-04-01T09:53:38.740 INFO:teuthology.orchestra.run.vm00.stdout: Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB) 2026-04-01T09:53:38.779 INFO:teuthology.orchestra.run.vm00.stdout:Collecting filelock>=3.18 2026-04-01T09:53:38.788 INFO:teuthology.orchestra.run.vm00.stdout: Downloading filelock-3.19.1-py3-none-any.whl (15 kB) 2026-04-01T09:53:38.844 INFO:teuthology.orchestra.run.vm00.stdout:Collecting tomli>=2.2.1 2026-04-01T09:53:38.853 INFO:teuthology.orchestra.run.vm00.stdout: Downloading tomli-2.4.1-py3-none-any.whl (14 kB) 2026-04-01T09:53:38.877 INFO:teuthology.orchestra.run.vm00.stdout:Collecting pyproject-api>=1.9.1 2026-04-01T09:53:38.888 INFO:teuthology.orchestra.run.vm00.stdout: Downloading pyproject_api-1.9.1-py3-none-any.whl (13 
kB) 2026-04-01T09:53:38.976 INFO:teuthology.orchestra.run.vm00.stdout:Collecting virtualenv>=20.31.2 2026-04-01T09:53:38.985 INFO:teuthology.orchestra.run.vm00.stdout: Downloading virtualenv-21.2.0-py3-none-any.whl (5.8 MB) 2026-04-01T09:53:39.092 INFO:teuthology.orchestra.run.vm00.stdout:Collecting typing-extensions>=4.14.1 2026-04-01T09:53:39.100 INFO:teuthology.orchestra.run.vm00.stdout: Downloading typing_extensions-4.15.0-py3-none-any.whl (44 kB) 2026-04-01T09:53:39.130 INFO:teuthology.orchestra.run.vm00.stdout:Collecting packaging>=25 2026-04-01T09:53:39.138 INFO:teuthology.orchestra.run.vm00.stdout: Downloading packaging-26.0-py3-none-any.whl (74 kB) 2026-04-01T09:53:39.170 INFO:teuthology.orchestra.run.vm00.stdout:Collecting cachetools>=6.1 2026-04-01T09:53:39.179 INFO:teuthology.orchestra.run.vm00.stdout: Downloading cachetools-6.2.6-py3-none-any.whl (11 kB) 2026-04-01T09:53:39.219 INFO:teuthology.orchestra.run.vm00.stdout:Collecting python-discovery>=1 2026-04-01T09:53:39.228 INFO:teuthology.orchestra.run.vm00.stdout: Downloading python_discovery-1.2.1-py3-none-any.whl (31 kB) 2026-04-01T09:53:39.254 INFO:teuthology.orchestra.run.vm00.stdout:Collecting distlib<1,>=0.3.7 2026-04-01T09:53:39.263 INFO:teuthology.orchestra.run.vm00.stdout: Downloading distlib-0.4.0-py2.py3-none-any.whl (469 kB) 2026-04-01T09:53:39.334 INFO:teuthology.orchestra.run.vm00.stdout:Installing collected packages: platformdirs, filelock, typing-extensions, tomli, python-discovery, packaging, distlib, virtualenv, pyproject-api, pluggy, colorama, chardet, cachetools, tox 2026-04-01T09:53:39.698 INFO:teuthology.orchestra.run.vm00.stdout:Successfully installed cachetools-6.2.6 chardet-5.2.0 colorama-0.4.6 distlib-0.4.0 filelock-3.19.1 packaging-26.0 platformdirs-4.4.0 pluggy-1.6.0 pyproject-api-1.9.1 python-discovery-1.2.1 tomli-2.4.1 tox-4.30.3 typing-extensions-4.15.0 virtualenv-21.2.0 2026-04-01T09:53:39.781 INFO:teuthology.orchestra.run.vm00.stderr:WARNING: You are using pip version 
21.3.1; however, version 26.0.1 is available. 2026-04-01T09:53:39.781 INFO:teuthology.orchestra.run.vm00.stderr:You should consider upgrading via the '/home/ubuntu/cephtest/tox-venv/bin/python3 -m pip install --upgrade pip' command. 2026-04-01T09:53:39.825 INFO:teuthology.run_tasks:Running task tox... 2026-04-01T09:53:39.827 INFO:tasks.tox:Deploying tox from pip... 2026-04-01T09:53:39.827 DEBUG:teuthology.orchestra.run.vm00:> python3 -m venv /home/ubuntu/cephtest/tox-venv 2026-04-01T09:53:40.546 DEBUG:teuthology.orchestra.run.vm00:> source /home/ubuntu/cephtest/tox-venv/bin/activate && pip install tox 2026-04-01T09:53:40.699 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: tox in ./cephtest/tox-venv/lib/python3.9/site-packages (4.30.3) 2026-04-01T09:53:40.704 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: typing-extensions>=4.14.1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (4.15.0) 2026-04-01T09:53:40.704 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: virtualenv>=20.31.2 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (21.2.0) 2026-04-01T09:53:40.704 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: platformdirs>=4.3.8 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (4.4.0) 2026-04-01T09:53:40.704 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: pluggy>=1.6 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (1.6.0) 2026-04-01T09:53:40.705 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: tomli>=2.2.1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (2.4.1) 2026-04-01T09:53:40.705 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: colorama>=0.4.6 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (0.4.6) 2026-04-01T09:53:40.705 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: filelock>=3.18 in 
./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (3.19.1) 2026-04-01T09:53:40.705 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: packaging>=25 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (26.0) 2026-04-01T09:53:40.706 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: chardet>=5.2 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (5.2.0) 2026-04-01T09:53:40.706 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: pyproject-api>=1.9.1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (1.9.1) 2026-04-01T09:53:40.706 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: cachetools>=6.1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (6.2.6) 2026-04-01T09:53:40.726 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: python-discovery>=1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from virtualenv>=20.31.2->tox) (1.2.1) 2026-04-01T09:53:40.726 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: distlib<1,>=0.3.7 in ./cephtest/tox-venv/lib/python3.9/site-packages (from virtualenv>=20.31.2->tox) (0.4.0) 2026-04-01T09:53:40.743 INFO:teuthology.orchestra.run.vm00.stderr:WARNING: You are using pip version 21.3.1; however, version 26.0.1 is available. 2026-04-01T09:53:40.743 INFO:teuthology.orchestra.run.vm00.stderr:You should consider upgrading via the '/home/ubuntu/cephtest/tox-venv/bin/python3 -m pip install --upgrade pip' command. 2026-04-01T09:53:40.764 INFO:teuthology.run_tasks:Running task dedup-tests... 2026-04-01T09:53:40.767 DEBUG:tasks.dedup_tests:config is {'client.0': {'rgw_server': 'client.0'}} 2026-04-01T09:53:40.767 INFO:tasks.dedup_tests:Downloading dedup-tests... 
2026-04-01T09:53:40.767 INFO:tasks.dedup_tests:Using branch tt-20.2.0-sse-s3-kmip-preview-not-for-production-1 from http://git.local/ceph.git for dedup tests 2026-04-01T09:53:40.767 DEBUG:teuthology.orchestra.run.vm00:> git clone -b tt-20.2.0-sse-s3-kmip-preview-not-for-production-1 http://git.local/ceph.git /home/ubuntu/cephtest/ceph 2026-04-01T09:53:40.786 INFO:teuthology.orchestra.run.vm00.stderr:Cloning into '/home/ubuntu/cephtest/ceph'... 2026-04-01T09:54:13.132 INFO:tasks.dedup_tests:Creating rgw user... 2026-04-01T09:54:13.132 DEBUG:tasks.dedup_tests:Creating user foo.client.0 on client.0 2026-04-01T09:54:13.132 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid foo.client.0 --display-name 'Mr. foo.client.0' --access-key BUJPFQDXXSWJYDQRGPFG --secret ujwtZtLMDLoPNZBPzBQw6mUQ2ZhqmuaogIk7wufXj2ELSWA60p5Uhg== --cluster ceph 2026-04-01T09:54:13.212 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-04-01T09:54:13.212 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-04-01T09:54:13.233 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.231+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.232+0000 7f07d392f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:54:13.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.232+0000 7f07d392f900 20 realm 2026-04-01T09:54:13.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.232+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.233+0000 7f07d392f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:54:13.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.233+0000 7f07d392f900 4 RGWPeriod::init failed to init realm id : (2) 
No such file or directory 2026-04-01T09:54:13.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.233+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.233+0000 7f07d392f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:54:13.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.233+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.235 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.234+0000 7f07d392f900 20 rados_obj.operate() r=0 bl.length=46 2026-04-01T09:54:13.235 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.234+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.234+0000 7f07d392f900 20 rados_obj.operate() r=0 bl.length=1190 2026-04-01T09:54:13.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.234+0000 7f07d392f900 20 searching for the correct realm 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.243+0000 7f07d392f900 20 RGWRados::pool_iterate: got zonegroup_info.9b133e09-d229-457d-a14c-135340d8b7fa 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.243+0000 7f07d392f900 20 RGWRados::pool_iterate: got default.zonegroup. 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.243+0000 7f07d392f900 20 RGWRados::pool_iterate: got zone_info.0ece0ff1-62c2-4422-8d5e-5d6544aecfef 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.243+0000 7f07d392f900 20 RGWRados::pool_iterate: got default.zone. 
2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.243+0000 7f07d392f900 20 RGWRados::pool_iterate: got zone_names.default 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.243+0000 7f07d392f900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.243+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 20 rados_obj.operate() r=0 bl.length=46 2026-04-01T09:54:13.245 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 20 rados_obj.operate() r=0 bl.length=470 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 20 zone default found 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 4 Realm: () 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 4 ZoneGroup: default (9b133e09-d229-457d-a14c-135340d8b7fa) 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 4 Zone: default (0ece0ff1-62c2-4422-8d5e-5d6544aecfef) 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 10 cannot find current period zonegroup using local zonegroup configuration 2026-04-01T09:54:13.246 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.244+0000 7f07d392f900 20 zonegroup default 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.245+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.245+0000 7f07d392f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.245+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.246 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.245+0000 7f07d392f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:54:13.247 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.245+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.247 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.245+0000 7f07d392f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:54:13.247 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.245+0000 7f07d392f900 20 started sync module instance, tier type = 2026-04-01T09:54:13.247 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.245+0000 7f07d392f900 20 started zone id=0ece0ff1-62c2-4422-8d5e-5d6544aecfef (name=default) with tier type = 2026-04-01T09:54:13.251 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.249+0000 7f07d392f900 20 add_watcher() i=7 2026-04-01T09:54:13.251 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.250+0000 7f07d392f900 20 add_watcher() i=3 2026-04-01T09:54:13.251 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.250+0000 7f07d392f900 20 add_watcher() i=0 2026-04-01T09:54:13.252 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.250+0000 7f07d392f900 20 add_watcher() i=2 2026-04-01T09:54:13.252 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.250+0000 7f07d392f900 20 add_watcher() i=5 2026-04-01T09:54:13.252 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.250+0000 7f07d392f900 20 add_watcher() i=1 2026-04-01T09:54:13.252 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.250+0000 7f07d392f900 20 add_watcher() i=4 2026-04-01T09:54:13.252 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.250+0000 7f07d392f900 20 add_watcher() i=6 2026-04-01T09:54:13.252 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.250+0000 7f07d392f900 2 all 8 watchers are set, enabling cache 2026-04-01T09:54:13.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.253+0000 7f07d392f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-04-01T09:54:13.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.253+0000 7f07d392f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-04-01T09:54:13.254 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.253+0000 7f07d392f900 5 note: GC not initialized 2026-04-01T09:54:13.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.253+0000 7f07777fe640 20 reqs_thread_entry: start 2026-04-01T09:54:13.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 20 init_complete bucket index max shards: 11 2026-04-01T09:54:13.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 20 Filter name: none 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07757fa640 20 reqs_thread_entry: start 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 10 cache get: name=default.rgw.meta+users.uid+foo.client.0 : miss 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 20 rados_obj.operate() r=-2 
bl.length=0 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 10 cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x0 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 10 adding default.rgw.meta+users.uid+foo.client.0 to cache LRU end 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 10 cache get: name=default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG : miss 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.484+0000 7f07d392f900 20 rados->read ofs=0 len=0 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.485+0000 7f07d392f900 20 rados_obj.operate() r=-2 bl.length=0 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.485+0000 7f07d392f900 10 cache put: name=default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG info.flags=0x0 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.485+0000 7f07d392f900 10 adding default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG to cache LRU end 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.485+0000 7f07d392f900 10 cache get: name=default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG : hit (negative entry) 2026-04-01T09:54:13.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.485+0000 7f07d392f900 10 cache get: name=default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG : hit (negative entry) 2026-04-01T09:54:13.498 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.497+0000 7f07d392f900 10 cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x17 2026-04-01T09:54:13.498 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.497+0000 7f07d392f900 10 moving default.rgw.meta+users.uid+foo.client.0 to cache LRU end 2026-04-01T09:54:13.498 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.497+0000 7f07d392f900 10 distributing notification oid=default.rgw.control:notify.0 cni=[op: 0, obj: default.rgw.meta:users.uid:foo.client.0, ofs0, ns] 2026-04-01T09:54:13.499 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.497+0000 7f07ae7fc640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 163208757248 cookie 93895993096448 notifier 4694 bl.length()=628 2026-04-01T09:54:13.499 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.497+0000 7f07ae7fc640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x17 2026-04-01T09:54:13.499 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.497+0000 7f07ae7fc640 10 rgw watcher librados: moving default.rgw.meta+users.uid+foo.client.0 to cache LRU end 2026-04-01T09:54:13.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.508+0000 7f07d392f900 10 cache put: name=default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG info.flags=0x7 2026-04-01T09:54:13.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.508+0000 7f07d392f900 10 moving default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG to cache LRU end 2026-04-01T09:54:13.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.508+0000 7f07d392f900 10 distributing notification oid=default.rgw.control:notify.6 cni=[op: 0, obj: default.rgw.meta:users.keys:BUJPFQDXXSWJYDQRGPFG, ofs0, ns] 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.508+0000 7f07adffb640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 163208757248 cookie 93895994832080 notifier 4694 bl.length()=186 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.508+0000 7f07adffb640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG info.flags=0x7 2026-04-01T09:54:13.510 
INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.508+0000 7f07adffb640 10 rgw watcher librados: moving default.rgw.meta+users.keys+BUJPFQDXXSWJYDQRGPFG to cache LRU end 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "user_id": "foo.client.0", 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "display_name": "Mr. foo.client.0", 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "email": "", 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "suspended": 0, 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "max_buckets": 1000, 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "subusers": [], 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "keys": [ 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: { 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "user": "foo.client.0", 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "access_key": "BUJPFQDXXSWJYDQRGPFG", 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "secret_key": "ujwtZtLMDLoPNZBPzBQw6mUQ2ZhqmuaogIk7wufXj2ELSWA60p5Uhg==", 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "active": true, 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "create_date": "2026-04-01T09:54:13.486656Z" 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: ], 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "swift_keys": [], 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "caps": [], 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "op_mask": "read, write, delete", 2026-04-01T09:54:13.510 INFO:teuthology.orchestra.run.vm00.stdout: "default_placement": "", 2026-04-01T09:54:13.510 
INFO:teuthology.orchestra.run.vm00.stdout: "default_storage_class": "", 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "placement_tags": [], 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "bucket_quota": { 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "check_on_raw": false, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "max_size": -1, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "max_size_kb": 0, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "max_objects": -1 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "user_quota": { 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "check_on_raw": false, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "max_size": -1, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "max_size_kb": 0, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "max_objects": -1 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "temp_url_keys": [], 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "type": "rgw", 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "mfa_ids": [], 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "account_id": "", 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "path": "/", 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "create_date": "2026-04-01T09:54:13.486651Z", 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: "tags": [], 2026-04-01T09:54:13.511 
INFO:teuthology.orchestra.run.vm00.stdout: "group_ids": [] 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-04-01T09:54:13.511 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:54:13.513 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.512+0000 7f07d392f900 20 remove_watcher() i=0 2026-04-01T09:54:13.514 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.512+0000 7f07d392f900 2 removed watcher, disabling cache 2026-04-01T09:54:13.514 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.513+0000 7f07d392f900 20 remove_watcher() i=4 2026-04-01T09:54:13.514 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.513+0000 7f07d392f900 20 remove_watcher() i=3 2026-04-01T09:54:13.514 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.513+0000 7f07d392f900 20 remove_watcher() i=7 2026-04-01T09:54:13.514 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.513+0000 7f07d392f900 20 remove_watcher() i=5 2026-04-01T09:54:13.514 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.513+0000 7f07d392f900 20 remove_watcher() i=2 2026-04-01T09:54:13.514 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.513+0000 7f07d392f900 20 remove_watcher() i=1 2026-04-01T09:54:13.514 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T09:54:13.513+0000 7f07d392f900 20 remove_watcher() i=6 2026-04-01T09:54:13.520 INFO:tasks.dedup_tests:Configuring dedup-tests... 2026-04-01T09:54:13.520 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-04-01T09:54:13.521 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/deduptests.client.0.conf 2026-04-01T09:54:13.578 INFO:tasks.dedup_tests:Running dedup-tests... 
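The `radosgw-admin user create` call above prints the full user record as JSON, including the key pair the test suite will use against the S3 endpoint. A minimal sketch of pulling S3 credentials out of that record (abridged to the fields involved; values copied from the output above):

```python
import json

# User record as printed by `radosgw-admin user create` in the log above,
# abridged to the fields needed here.
user_json = json.loads("""
{
  "user_id": "foo.client.0",
  "keys": [
    {
      "user": "foo.client.0",
      "access_key": "BUJPFQDXXSWJYDQRGPFG",
      "secret_key": "ujwtZtLMDLoPNZBPzBQw6mUQ2ZhqmuaogIk7wufXj2ELSWA60p5Uhg==",
      "active": true
    }
  ]
}
""")

def s3_credentials(user: dict) -> tuple:
    """Return (access_key, secret_key) of the first active key pair."""
    for key in user.get("keys", []):
        if key.get("active"):
            return key["access_key"], key["secret_key"]
    raise LookupError("no active key for user %r" % user.get("user_id"))
```

The debug lines interleaved with the JSON (`cache get: ... miss`, `distributing notification`, `remove_watcher()`) are the expected side effects of `debug rgw: 20` from the job's overrides, not errors.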
2026-04-01T09:54:13.578 DEBUG:teuthology.orchestra.run.vm00:dedup tests against rgw> source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test' 2026-04-01T09:54:13.934 INFO:teuthology.orchestra.run.vm00.stdout:py: install_deps> python -I -m pip install -r requirements.txt 2026-04-01T09:54:16.772 INFO:teuthology.orchestra.run.vm00.stdout:py: commands[0]> pytest -v -m 'basic_test or request_test or example_test' 2026-04-01T09:54:16.862 INFO:teuthology.orchestra.run.vm00.stdout:============================= test session starts ============================== 2026-04-01T09:54:16.862 INFO:teuthology.orchestra.run.vm00.stdout:platform linux -- Python 3.9.23, pytest-8.4.2, pluggy-1.6.0 -- /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/bin/python 2026-04-01T09:54:16.862 INFO:teuthology.orchestra.run.vm00.stdout:cachedir: .tox/py/.pytest_cache 2026-04-01T09:54:16.862 INFO:teuthology.orchestra.run.vm00.stdout:rootdir: /home/ubuntu/cephtest/ceph/src/test/rgw/dedup 2026-04-01T09:54:16.862 INFO:teuthology.orchestra.run.vm00.stdout:configfile: pytest.ini 2026-04-01T09:54:16.957 INFO:teuthology.orchestra.run.vm00.stdout:collecting ... 
collected 34 items 2026-04-01T09:54:16.957 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T09:54:17.075 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_etag_corruption PASSED [ 2%] 2026-04-01T09:54:17.075 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_md5_collisions PASSED [ 5%] 2026-04-01T09:54:17.075 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small PASSED [ 8%] 2026-04-01T09:54:17.076 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_with_tenants PASSED [ 11%] 2026-04-01T09:54:17.076 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_0_with_tenants PASSED [ 14%] 2026-04-01T09:54:17.076 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_0 PASSED [ 17%] 2026-04-01T09:54:17.077 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_1_with_tenants PASSED [ 20%] 2026-04-01T09:54:17.077 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_1 PASSED [ 23%] 2026-04-01T09:54:17.077 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_2_with_tenants PASSED [ 26%] 2026-04-01T09:54:17.078 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_2 PASSED [ 29%] 2026-04-01T09:54:17.078 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_with_remove_multi_tenants PASSED [ 32%] 2026-04-01T09:54:17.078 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_with_remove PASSED [ 35%] 2026-04-01T09:54:17.079 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_multipart_with_tenants PASSED [ 38%] 2026-04-01T09:54:17.079 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_multipart PASSED [ 41%] 2026-04-01T09:54:17.079 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_basic_with_tenants PASSED [ 44%] 2026-04-01T09:54:17.080 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_basic 
PASSED [ 47%] 2026-04-01T09:54:17.080 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_multipart_with_tenants PASSED [ 50%] 2026-04-01T09:54:17.080 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_multipart PASSED [ 52%] 2026-04-01T09:54:17.081 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_large_scale_with_tenants PASSED [ 55%] 2026-04-01T09:54:17.081 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_large_scale PASSED [ 58%] 2026-04-01T09:54:17.081 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_empty_bucket PASSED [ 61%] 2026-04-01T09:54:17.082 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_loop_with_tenants PASSED [ 64%] 2026-04-01T09:54:24.083 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_with_tenants 2026-04-01T09:54:24.083 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:54:24.083 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T09:54:24.697 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 67%] 2026-04-01T09:57:09.228 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart 2026-04-01T09:57:09.228 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:57:09.228 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T09:57:14.497 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 70%] 2026-04-01T09:57:25.020 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_basic 2026-04-01T09:57:25.021 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:57:25.021 
INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T09:57:25.679 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 73%] 2026-04-01T09:57:36.048 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_multipart 2026-04-01T09:57:36.048 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:57:36.048 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T09:57:36.599 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 76%] 2026-04-01T09:57:42.420 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small 2026-04-01T09:57:42.420 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:57:42.420 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T09:57:42.888 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 79%] 2026-04-01T09:57:56.826 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_large_mix 2026-04-01T09:57:56.827 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:57:56.827 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T09:57:57.887 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 82%] 2026-04-01T09:58:18.228 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_basic_with_tenants 2026-04-01T09:58:18.228 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:58:18.228 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 
seconds 2026-04-01T09:58:19.144 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 85%] 2026-04-01T09:59:31.432 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart_with_tenants 2026-04-01T09:59:31.432 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:59:31.432 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T09:59:33.799 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 88%] 2026-04-01T09:59:43.155 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_multipart_with_tenants 2026-04-01T09:59:43.155 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T09:59:43.155 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T09:59:43.827 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 91%] 2026-04-01T10:04:47.974 INFO:tasks.ceph.mon.a.vm00.stderr:2026-04-01T10:04:47.973+0000 7f1657841640 -1 log_channel(cluster) log [ERR] : Health check failed: mon c is very low on available space (MON_DISK_CRIT) 2026-04-01T10:04:53.197 INFO:tasks.ceph.mon.a.vm00.stderr:2026-04-01T10:04:53.195+0000 7f165a046640 -1 log_channel(cluster) log [ERR] : Health check update: mons a,c are very low on available space (MON_DISK_CRIT) 2026-04-01T10:07:17.217 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale_with_tenants 2026-04-01T10:07:17.217 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T10:07:17.217 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-04-01T10:07:17.218 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1288 [64] 
obj_count=65601, upload=436(sec), exec=5(sec), verify=0(sec) 2026-04-01T10:07:24.418 INFO:tasks.ceph.mon.a.vm00.stderr:2026-04-01T10:07:24.416+0000 7f165a046640 -1 log_channel(cluster) log [ERR] : Health check update: mons a,b,c are very low on available space (MON_DISK_CRIT) 2026-04-01T10:09:07.583 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 94%] 2026-04-01T10:09:28.194 INFO:tasks.ceph.osd.3.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.3.log: (28) No space left on device 2026-04-01T10:09:28.194 INFO:tasks.ceph.osd.3.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.3.log: (28) No space left on device 2026-04-01T10:09:28.194 INFO:tasks.ceph.osd.0.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.0.log: (28) No space left on device 2026-04-01T10:09:28.194 INFO:tasks.ceph.osd.1.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.1.log: (28) No space left on device 2026-04-01T10:09:28.194 INFO:tasks.ceph.osd.2.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.2.log: (28) No space left on device 2026-04-01T10:09:28.194 INFO:tasks.rgw.client.0.vm00.stdout:problem writing to /var/log/ceph/rgw.ceph.client.0.log: (28) No space left on device 2026-04-01T10:09:28.196 INFO:tasks.rgw.client.0.vm00.stdout:tee: /var/log/ceph/rgw.ceph.client.0.stdout: No space left on device 2026-04-01T10:09:28.780 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:28.813 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:09:28.895 INFO:tasks.ceph.mgr.y.vm00.stderr:problem writing to /var/log/ceph/ceph-mgr.y.log: (28) No space left on device 2026-04-01T10:09:58.953 INFO:tasks.ceph.mon.a.vm00.stderr:2026-04-01T10:09:58.952+0000 7f165a046640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-a/store.db/000022.log: No space left on device code =  
Rocksdb transaction: 2026-04-01T10:09:58.953 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = '1152' value size = 13501) 2026-04-01T10:09:58.953 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8) 2026-04-01T10:09:58.953 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8) 2026-04-01T10:09:58.961 INFO:tasks.ceph.mon.a.vm00.stderr:/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f165a046640 time 2026-04-01T10:09:58.954688+0000 2026-04-01T10:09:58.962 INFO:tasks.ceph.mon.a.vm00.stderr:/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo) 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f165fd901fd] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a61ac) [0x561019a591ac] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x561019bd795c] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x561019be570b] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x561019be5b08] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x176) [0x561019be5e46] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a644d) [0x561019a5944d] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 8: 
(CommonSafeTimer::timer_thread()+0x130) [0x7f165fedc550] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2dcfb1) [0x7f165fedcfb1] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2ea) [0x7f165ee8b2ea] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103c0) [0x7f165ef103c0] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr:2026-04-01T10:09:58.963+0000 7f165a046640 -1 /runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f165a046640 time 2026-04-01T10:09:58.954688+0000 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr:/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo) 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f165fd901fd] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a61ac) [0x561019a591ac] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x561019bd795c] 2026-04-01T10:09:58.964 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x561019be570b] 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x561019be5b08] 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x176) [0x561019be5e46] 2026-04-01T10:09:58.965 
INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a644d) [0x561019a5944d] 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (CommonSafeTimer::timer_thread()+0x130) [0x7f165fedc550] 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2dcfb1) [0x7f165fedcfb1] 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2ea) [0x7f165ee8b2ea] 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103c0) [0x7f165ef103c0] 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr:*** Caught signal (Aborted) ** 2026-04-01T10:09:58.965 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7f165a046640 thread_name:safe_timer 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo) 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f165ee3fc30] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d02c) [0x7f165ee8d02c] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise() 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort() 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f165fd902ba] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a61ac) [0x561019a591ac] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x561019bd795c] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) [0x561019be570b] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x561019be5b08] 2026-04-01T10:09:58.968 
INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x176) [0x561019be5e46] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a644d) [0x561019a5944d] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7f165fedc550] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2dcfb1) [0x7f165fedcfb1] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2ea) [0x7f165ee8b2ea] 2026-04-01T10:09:58.968 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103c0) [0x7f165ef103c0] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr:2026-04-01T10:09:58.967+0000 7f165a046640 -1 *** Caught signal (Aborted) ** 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7f165a046640 thread_name:safe_timer 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo) 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f165ee3fc30] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d02c) [0x7f165ee8d02c] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise() 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort() 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f165fd902ba] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a61ac) [0x561019a591ac] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x561019bd795c] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) 
[0x561019be570b] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x561019be5b08] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x176) [0x561019be5e46] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a644d) [0x561019a5944d] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7f165fedc550] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2dcfb1) [0x7f165fedcfb1] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2ea) [0x7f165ee8b2ea] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103c0) [0x7f165ef103c0] 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2026-04-01T10:09:58.969 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:58.970 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: -2> 2026-04-01T10:09:58.952+0000 7f165a046640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-a/store.db/000022.log: No space left on device code =  Rocksdb transaction: 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = '1152' value size = 13501) 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8) 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8) 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: -1> 2026-04-01T10:09:58.963+0000 7f165a046640 -1 
/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f165a046640 time 2026-04-01T10:09:58.954688+0000 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr:/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo) 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f165fd901fd] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a61ac) [0x561019a591ac] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x561019bd795c] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x561019be570b] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x561019be5b08] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x176) [0x561019be5e46] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a644d) [0x561019a5944d] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (CommonSafeTimer::timer_thread()+0x130) [0x7f165fedc550] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2dcfb1) [0x7f165fedcfb1] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2ea) [0x7f165ee8b2ea] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103c0) 
[0x7f165ef103c0] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 0> 2026-04-01T10:09:58.967+0000 7f165a046640 -1 *** Caught signal (Aborted) ** 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7f165a046640 thread_name:safe_timer 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo) 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f165ee3fc30] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d02c) [0x7f165ee8d02c] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise() 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort() 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f165fd902ba] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a61ac) [0x561019a591ac] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x561019bd795c] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) [0x561019be570b] 2026-04-01T10:09:58.988 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x561019be5b08] 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x176) [0x561019be5e46] 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a644d) [0x561019a5944d] 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7f165fedc550] 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2dcfb1) 
[0x7f165fedcfb1] 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2ea) [0x7f165ee8b2ea] 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103c0) [0x7f165ef103c0] 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.989 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.992 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing 
to /var/log/ceph/ceph-mon.a.log: (28) No space left on device
to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:58.999 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.000 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.000 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.000 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.000 
INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.000 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.001 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.001 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.001 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-04-01T10:09:59.002 INFO:tasks.ceph.mon.a.vm00.stderr: -9999> 2026-04-01T10:09:58.952+0000 7f165a046640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-a/store.db/000022.log: No space left on device code =  Rocksdb transaction: 2026-04-01T10:09:59.002 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = '1152' value size = 13501) 2026-04-01T10:09:59.002 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8) 2026-04-01T10:09:59.002 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8) 2026-04-01T10:09:59.002 INFO:tasks.ceph.mon.a.vm00.stderr: -9998> 2026-04-01T10:09:58.963+0000 7f165a046640 -1 /runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f165a046640 time 2026-04-01T10:09:58.954688+0000 2026-04-01T10:09:59.002 INFO:tasks.ceph.mon.a.vm00.stderr:/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-04-01T10:09:59.002 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:59.002 INFO:tasks.ceph.mon.a.vm00.stderr: ceph 
version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo) 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f165fd901fd] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a61ac) [0x561019a591ac] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x561019bd795c] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x561019be570b] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x561019be5b08] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x176) [0x561019be5e46] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a644d) [0x561019a5944d] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (CommonSafeTimer::timer_thread()+0x130) [0x7f165fedc550] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2dcfb1) [0x7f165fedcfb1] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2ea) [0x7f165ee8b2ea] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103c0) [0x7f165ef103c0] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: -9997> 2026-04-01T10:09:58.967+0000 7f165a046640 -1 *** Caught signal (Aborted) ** 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7f165a046640 thread_name:safe_timer 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo) 2026-04-01T10:09:59.003 
INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f165ee3fc30] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d02c) [0x7f165ee8d02c] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise() 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort() 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f165fd902ba] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a61ac) [0x561019a591ac] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x561019bd795c] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) [0x561019be570b] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x561019be5b08] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x176) [0x561019be5e46] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a644d) [0x561019a5944d] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7f165fedc550] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2dcfb1) [0x7f165fedcfb1] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2ea) [0x7f165ee8b2ea] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103c0) [0x7f165ef103c0] 2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
2026-04-01T10:09:59.003 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-04-01T10:09:59.052 INFO:tasks.ceph.mon.a.vm00.stderr:daemon-helper: command crashed with signal 6
2026-04-01T10:10:00.996 INFO:tasks.ceph.mon.c.vm00.stderr:2026-04-01T10:10:00.994+0000 7f00b9edd640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-c/store.db/000022.log: No space left on device code =  Rocksdb transaction:
2026-04-01T10:10:00.996 INFO:tasks.ceph.mon.c.vm00.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-04-01T10:10:00.998 INFO:tasks.ceph.mon.c.vm00.stderr:/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f00b9edd640 time 2026-04-01T10:10:00.996429+0000
2026-04-01T10:10:00.998 INFO:tasks.ceph.mon.c.vm00.stderr:/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-04-01T10:10:00.998 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo)
2026-04-01T10:10:00.998 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f00c25901fd]
2026-04-01T10:10:00.998 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a61ac) [0x556bac4341ac]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x556bac517865]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x556bac520901]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x556bac51ccd0]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 6: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x556bac51d887]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x556bac48ce3d]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x556bac4815c6]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 9: ceph-mon(+0x2b333c) [0x556bac44133c]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f00c2806848]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49ac51) [0x7f00c289ac51]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /lib64/libc.so.6(+0x8b2ea) [0x7f00c168b2ea]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x1103c0) [0x7f00c17103c0]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr:*** Caught signal (Aborted) **
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f00b9edd640 thread_name:ms_dispatch
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo)
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f00c163fc30]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d02c) [0x7f00c168d02c]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f00c25902ba]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a61ac) [0x556bac4341ac]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x556bac517865]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x556bac520901]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x556bac51ccd0]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x556bac51d887]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x556bac48ce3d]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x556bac4815c6]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 13: ceph-mon(+0x2b333c) [0x556bac44133c]
2026-04-01T10:10:00.999 INFO:tasks.ceph.mon.c.vm00.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f00c2806848]
2026-04-01T10:10:01.000 INFO:tasks.ceph.mon.c.vm00.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49ac51) [0x7f00c289ac51]
2026-04-01T10:10:01.000 INFO:tasks.ceph.mon.c.vm00.stderr: 16: /lib64/libc.so.6(+0x8b2ea) [0x7f00c168b2ea]
2026-04-01T10:10:01.000 INFO:tasks.ceph.mon.c.vm00.stderr: 17: /lib64/libc.so.6(+0x1103c0) [0x7f00c17103c0]
2026-04-01T10:10:01.000 INFO:tasks.ceph.mon.c.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this.
2026-04-01T10:10:01.000 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device
2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing
to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 
INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.030 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No 
space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing 
to /var/log/ceph/ceph-mon.c.log: (28) No space left on device
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: -9999> 2026-04-01T10:10:00.994+0000 7f00b9edd640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-c/store.db/000022.log: No space left on device code =  Rocksdb transaction:
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: -9998> 2026-04-01T10:10:00.997+0000 7f00b9edd640 -1 /runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f00b9edd640 time 2026-04-01T10:10:00.996429+0000
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:/runner/scratch/rpms/ceph-debug/20.2.0-8-g0597158282e/BUILD/ceph-20.2.0-8-g0597158282e/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo)
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7f00c25901fd]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a61ac) [0x556bac4341ac]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x556bac517865]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x556bac520901]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr<MonOpRequest>)+0x620) [0x556bac51ccd0]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 6: (Elector::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa7) [0x556bac51d887]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xe4d) [0x556bac48ce3d]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x556bac4815c6]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 9: ceph-mon(+0x2b333c) [0x556bac44133c]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f00c2806848]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49ac51) [0x7f00c289ac51]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /lib64/libc.so.6(+0x8b2ea) [0x7f00c168b2ea]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x1103c0) [0x7f00c17103c0]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: -9997> 2026-04-01T10:10:00.998+0000 7f00b9edd640 -1 *** Caught signal (Aborted) **
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f00b9edd640 thread_name:ms_dispatch
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-8-g0597158282e (0597158282e6d69429e60df2354a6c8eed0e5bce) tentacle (stable - RelWithDebInfo)
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f00c163fc30]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d02c) [0x7f00c168d02c]
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-04-01T10:10:01.031 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7f00c25902ba]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a61ac) [0x556bac4341ac]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x556bac517865]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x556bac520901]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr<MonOpRequest>)+0x620) [0x556bac51ccd0]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (Elector::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa7) [0x556bac51d887]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xe4d) [0x556bac48ce3d]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x556bac4815c6]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 13: ceph-mon(+0x2b333c) [0x556bac44133c]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f00c2806848]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49ac51) [0x7f00c289ac51]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 16: /lib64/libc.so.6(+0x8b2ea) [0x7f00c168b2ea]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: 17: /lib64/libc.so.6(+0x1103c0) [0x7f00c17103c0]
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2026-04-01T10:10:01.032 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-04-01T10:10:01.096 INFO:tasks.ceph.mon.c.vm00.stderr:daemon-helper: command crashed with signal 6
2026-04-01T10:10:03.463 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~0s
2026-04-01T10:10:03.463 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~0s
2026-04-01T10:10:09.873 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~6s
2026-04-01T10:10:09.873 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~6s
2026-04-01T10:10:16.282 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~13s
2026-04-01T10:10:16.282 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~13s
2026-04-01T10:10:22.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~19s
2026-04-01T10:10:22.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~19s
2026-04-01T10:10:22.843 INFO:tasks.ceph.osd.5.vm03.stderr:problem writing to /var/log/ceph/ceph-osd.5.log: (28) No space left on device
2026-04-01T10:10:22.844 INFO:tasks.ceph.osd.6.vm03.stderr:problem writing to /var/log/ceph/ceph-osd.6.log: (28) No space left on device
2026-04-01T10:10:22.844 INFO:tasks.ceph.osd.5.vm03.stderr:problem writing to /var/log/ceph/ceph-osd.5.log: (28) No space left on device
2026-04-01T10:10:22.845 INFO:tasks.ceph.osd.4.vm03.stderr:problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left on device
2026-04-01T10:10:22.845 INFO:tasks.ceph.osd.7.vm03.stderr:problem writing to /var/log/ceph/ceph-osd.7.log: (28) No space left on device
2026-04-01T10:10:22.845 INFO:tasks.ceph.osd.6.vm03.stderr:problem writing to /var/log/ceph/ceph-osd.6.log: (28) No space left on device
2026-04-01T10:10:22.881 INFO:tasks.ceph.mon.b.vm03.stderr:problem writing to /var/log/ceph/ceph-mon.b.log: (28) No space left on device
2026-04-01T10:10:23.717 INFO:tasks.ceph.mgr.x.vm03.stderr:problem writing to /var/log/ceph/ceph-mgr.x.log: (28) No space left on device
2026-04-01T10:10:29.102 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~26s
2026-04-01T10:10:29.102 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~26s
2026-04-01T10:10:35.509 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~32s
2026-04-01T10:10:35.509 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~32s
2026-04-01T10:10:38.734 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:10:38.734+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 14 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:10:41.916 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~38s
2026-04-01T10:10:41.916 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~38s
2026-04-01T10:10:43.735 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:10:43.734+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 35 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:10:48.324 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~45s
2026-04-01T10:10:48.324 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~45s
2026-04-01T10:10:48.735 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:10:48.735+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 54 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:10:53.736 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:10:53.735+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 75 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:10:54.735 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~51s
2026-04-01T10:10:54.735 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~51s
2026-04-01T10:10:58.736 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:10:58.736+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 102 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:01.142 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~58s
2026-04-01T10:11:01.142 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~58s
2026-04-01T10:11:03.737 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:03.736+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 132 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:07.554 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~64s
2026-04-01T10:11:07.554 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~64s
2026-04-01T10:11:08.737 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:08.736+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 158 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:13.738 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:13.737+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 187 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:13.962 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~71s
2026-04-01T10:11:13.962 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~71s
2026-04-01T10:11:16.027 INFO:tasks.rgw.client.1.vm03.stdout:problem writing to /var/log/ceph/rgw.ceph.client.1.log: (28) No space left on device
2026-04-01T10:11:16.027 INFO:tasks.rgw.client.1.vm03.stdout:tee: /var/log/ceph/rgw.ceph.client.1.stdout: No space left on device
2026-04-01T10:11:18.738 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:18.737+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 214 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:20.374 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~77s
2026-04-01T10:11:20.374 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~77s
2026-04-01T10:11:23.740 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:23.738+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 245 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:26.784 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~83s
2026-04-01T10:11:26.784 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~83s
2026-04-01T10:11:28.739 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:28.739+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 271 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:33.191 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~90s
2026-04-01T10:11:33.192 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~90s
2026-04-01T10:11:33.740 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:33.739+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 350 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:38.741 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:38.740+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 378 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:39.598 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~96s
2026-04-01T10:11:39.598 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~96s
2026-04-01T10:11:43.741 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:43.740+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 412 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:46.005 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~103s
2026-04-01T10:11:46.005 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~103s
2026-04-01T10:11:48.741 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:48.741+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 438 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:52.411 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~109s
2026-04-01T10:11:52.411 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~109s
2026-04-01T10:11:53.742 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:53.742+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 468 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:58.744 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:11:58.743+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 494 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:11:58.818 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~115s
2026-04-01T10:11:58.819 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~115s
2026-04-01T10:12:03.745 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:03.743+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 523 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:05.225 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~122s
2026-04-01T10:12:05.225 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~122s
2026-04-01T10:12:08.745 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:08.744+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 580 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:11.636 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~128s
2026-04-01T10:12:11.636 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~128s
2026-04-01T10:12:13.747 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:13.744+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 605 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:18.046 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~135s
2026-04-01T10:12:18.046 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~135s
2026-04-01T10:12:18.746 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:18.745+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 633 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:23.748 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:23.745+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 662 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:24.455 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~141s
2026-04-01T10:12:24.456 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~141s
2026-04-01T10:12:28.747 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:28.746+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 688 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:30.867 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~147s
2026-04-01T10:12:30.867 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~147s
2026-04-01T10:12:33.748 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:33.746+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 717 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:37.276 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~154s
2026-04-01T10:12:37.276 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~154s
2026-04-01T10:12:38.748 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:38.747+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 744 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:43.684 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~160s
2026-04-01T10:12:43.684 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~160s
2026-04-01T10:12:43.751 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:43.747+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 775 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:48.749 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:48.748+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 800 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:50.092 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~167s
2026-04-01T10:12:50.092 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~167s
2026-04-01T10:12:53.749 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:53.748+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 829 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:12:56.499 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~173s
2026-04-01T10:12:56.499 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~173s
2026-04-01T10:12:58.749 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:12:58.749+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 856 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:02.905 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~179s
2026-04-01T10:13:02.905 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~179s
2026-04-01T10:13:03.750 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:03.749+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 889 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:08.751 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:08.750+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 913 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:09.315 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~186s
2026-04-01T10:13:09.315 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~186s
2026-04-01T10:13:13.752 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:13.750+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 941 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:15.727 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~192s
2026-04-01T10:13:15.727 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~192s
2026-04-01T10:13:18.751 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:18.751+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 968 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:22.133 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~199s
2026-04-01T10:13:22.133 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~199s
2026-04-01T10:13:23.752 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:23.751+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 997 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:28.540 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~205s
2026-04-01T10:13:28.540 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~205s
2026-04-01T10:13:28.753 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:28.752+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1026 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:33.753 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:33.752+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1058 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:34.946 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~211s
2026-04-01T10:13:34.946 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~211s
2026-04-01T10:13:38.754 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:38.753+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1082 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:41.353 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~218s
2026-04-01T10:13:41.353 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~218s
2026-04-01T10:13:43.754 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:43.753+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1114 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:47.758 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~224s
2026-04-01T10:13:47.758 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~224s
2026-04-01T10:13:48.755 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:48.754+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1138 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:53.756 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:53.755+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1165 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:13:54.170 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~231s
2026-04-01T10:13:54.170 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~231s
2026-04-01T10:13:58.756 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:13:58.755+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1194 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:00.583 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~237s
2026-04-01T10:14:00.583 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~237s
2026-04-01T10:14:03.757 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:03.756+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1241 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:06.988 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~244s
2026-04-01T10:14:06.988 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~244s
2026-04-01T10:14:08.758 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:08.758+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1338 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:13.400 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~250s
2026-04-01T10:14:13.400 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~250s
2026-04-01T10:14:13.759 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:13.758+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1435 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:18.760 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:18.759+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1541 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:19.811 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~256s
2026-04-01T10:14:19.811 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~256s
2026-04-01T10:14:23.761 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:23.759+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1633 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:26.218 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~263s
2026-04-01T10:14:26.218 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~263s
2026-04-01T10:14:28.761 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:28.760+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1734 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:32.624 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~269s
2026-04-01T10:14:32.624 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~269s
2026-04-01T10:14:33.762 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:33.761+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1832 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:38.762 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:38.761+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 1930 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events)
2026-04-01T10:14:39.035 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~276s
2026-04-01T10:14:39.035 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~276s
2026-04-01T10:14:43.763
INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:43.762+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 2021 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events) 2026-04-01T10:14:45.447 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~282s 2026-04-01T10:14:45.448 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~282s 2026-04-01T10:14:48.763 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:48.763+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 2124 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events) 2026-04-01T10:14:51.855 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~288s 2026-04-01T10:14:51.855 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~288s 2026-04-01T10:14:53.764 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:53.764+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 2225 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events) 2026-04-01T10:14:58.261 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~295s 2026-04-01T10:14:58.262 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~295s 2026-04-01T10:14:58.767 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:14:58.764+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 2320 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events) 2026-04-01T10:15:03.767 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:15:03.766+0000 7f057ede0640 -1 mon.b@1(probing) e1 get_health_metrics reporting 2429 slow ops, oldest is monmgrreport(gid 4105, 0 checks, 0 progress events) 2026-04-01T10:15:04.669 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~301s 2026-04-01T10:15:04.669 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~301s 2026-04-01T10:15:04.669 
INFO:tasks.daemonwatchdog.daemon_watchdog:BARK! unmounting mounts and killing all daemons 2026-04-01T10:15:06.076 INFO:tasks.ceph.osd.0:Sent signal 15 2026-04-01T10:15:06.076 INFO:tasks.ceph.osd.1:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.2:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.3:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.4:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.5:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.6:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.7:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.mon.b:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.rgw.client.0:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.rgw.client.1:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.rgw.client.2:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.mgr.y:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.mgr.x:Sent signal 15 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.5.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9265f23640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 5 (PID: 59302) UID: 0 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.5.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9265f23640 -1 osd.5 71 *** Got signal Terminated *** 2026-04-01T10:15:06.077 INFO:tasks.ceph.osd.5.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9265f23640 -1 osd.5 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-04-01T10:15:06.077 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:15:06.077+0000 7f0581de6640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-mon -f --cluster ceph -i b (PID: 58639) UID: 0 2026-04-01T10:15:06.077 INFO:tasks.ceph.mon.b.vm03.stderr:2026-04-01T10:15:06.077+0000 7f0581de6640 -1 mon.b@1(probing) e1 *** Got Signal Terminated *** 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.6.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9fbf8cf640 -1 received signal: Terminated 
from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 6 (PID: 59301) UID: 0 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.6.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9fbf8cf640 -1 osd.6 71 *** Got signal Terminated *** 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.6.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9fbf8cf640 -1 osd.6 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.7.vm03.stderr:2026-04-01T10:15:06.077+0000 7f71eee80640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 7 (PID: 59307) UID: 0 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.7.vm03.stderr:2026-04-01T10:15:06.077+0000 7f71eee80640 -1 osd.7 71 *** Got signal Terminated *** 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.7.vm03.stderr:2026-04-01T10:15:06.077+0000 7f71eee80640 -1 osd.7 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.4.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9b76ea7640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 4 (PID: 59309) UID: 0 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.4.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9b76ea7640 -1 osd.4 71 *** Got signal Terminated *** 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.4.vm03.stderr:2026-04-01T10:15:06.077+0000 7f9b76ea7640 -1 osd.4 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-04-01T10:15:06.078 INFO:tasks.rgw.client.1.vm03.stdout:2026-04-01T10:15:06.078+0000 7f84fb312640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.1 --cluster ceph -k /etc/ceph/ceph.client.1.keyring --log-file /var/log/ceph/rgw.ceph.client.1.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock --foreground (PID: 63546) UID: 0 2026-04-01T10:15:06.078 INFO:tasks.rgw.client.1.vm03.stdout:2026-04-01T10:15:06.078+0000 
7f8500b23980 -1 shutting down 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.0.vm00.stderr:2026-04-01T10:15:06.078+0000 7f6f87978640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 0 (PID: 62641) UID: 0 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.0.vm00.stderr:2026-04-01T10:15:06.078+0000 7f6f87978640 -1 osd.0 71 *** Got signal Terminated *** 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.1.vm00.stderr:2026-04-01T10:15:06.078+0000 7f3f8c778640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 1 (PID: 62647) UID: 0 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.1.vm00.stderr:2026-04-01T10:15:06.078+0000 7f3f8c778640 -1 osd.1 71 *** Got signal Terminated *** 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.2.vm00.stderr:2026-04-01T10:15:06.078+0000 7fd3d9fcd640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 2 (PID: 62653) UID: 0 2026-04-01T10:15:06.078 INFO:tasks.ceph.osd.2.vm00.stderr:2026-04-01T10:15:06.078+0000 7fd3d9fcd640 -1 osd.2 71 *** Got signal Terminated *** 2026-04-01T10:15:06.079 INFO:tasks.ceph.osd.3.vm00.stderr:2026-04-01T10:15:06.078+0000 7fa985e72640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 3 (PID: 62656) UID: 0 2026-04-01T10:15:06.079 INFO:tasks.ceph.osd.3.vm00.stderr:2026-04-01T10:15:06.078+0000 7fa985e72640 -1 osd.3 71 *** Got signal Terminated *** 2026-04-01T10:15:06.079 INFO:tasks.ceph.osd.0.vm00.stderr:2026-04-01T10:15:06.078+0000 7f6f87978640 -1 osd.0 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-04-01T10:15:06.079 INFO:tasks.ceph.osd.2.vm00.stderr:2026-04-01T10:15:06.078+0000 7fd3d9fcd640 -1 osd.2 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-04-01T10:15:06.079 INFO:tasks.rgw.client.2.vm07.stdout:2026-04-01T10:15:06.079+0000 7efe3f555640 -1 received signal: Terminated from /usr/bin/python3 
/bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.2 --cluster ceph -k /etc/ceph/ceph.client.2.keyring --log-file /var/log/ceph/rgw.ceph.client.2.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock --foreground (PID: 52782) UID: 0 2026-04-01T10:15:06.079 INFO:tasks.rgw.client.2.vm07.stdout:2026-04-01T10:15:06.079+0000 7efe44d24980 -1 shutting down 2026-04-01T10:15:06.085 INFO:tasks.ceph.osd.1.vm00.stderr:2026-04-01T10:15:06.084+0000 7f3f8c778640 -1 osd.1 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-04-01T10:15:06.085 INFO:tasks.ceph.osd.3.vm00.stderr:2026-04-01T10:15:06.084+0000 7fa985e72640 -1 osd.3 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-04-01T10:15:06.085 INFO:tasks.rgw.client.0.vm00.stdout:2026-04-01T10:15:06.078+0000 7fd0f3e7c640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --foreground (PID: 68684) UID: 0 2026-04-01T10:15:06.085 INFO:tasks.rgw.client.0.vm00.stdout:2026-04-01T10:15:06.079+0000 7fd0f9523980 -1 shutting down 2026-04-01T10:15:06.279 INFO:tasks.ceph.mgr.y.vm00.stderr:daemon-helper: command crashed with signal 15 2026-04-01T10:15:35.109 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale 2026-04-01T10:15:35.109 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-04-01T10:15:35.109 INFO:teuthology.orchestra.run.vm00.stdout:WARNING dedup.test_dedup:test_dedup.py:2748 test_dedup_dry_large_scale: failed!! 
2026-04-01T10:15:41.118 INFO:teuthology.orchestra.run.vm00.stdout:FAILED [ 97%] 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_cleanup PASSED [100%] 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout:=================================== FAILURES =================================== 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout:__________________________ test_dedup_dry_large_scale __________________________ 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout:self = 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: def _new_conn(self): 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: """Establish a socket connection and set nodelay settings on it. 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: :return: New socket connection. 
2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: """ 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw = {} 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: if self.source_address: 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw["source_address"] = self.source_address 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: if self.socket_options: 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw["socket_options"] = self.socket_options 2026-04-01T10:15:41.121 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: try: 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout:> conn = connection.create_connection( 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: (self._dns_host, self.port), self.timeout, **extra_kw 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: ) 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connection.py:174: 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/util/connection.py:95: in create_connection 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: raise err 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout:address = ('vm00.local', 80), timeout = 60, source_address = None 
2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout:socket_options = [(6, 1, 1)] 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: def create_connection( 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: address, 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: timeout=socket._GLOBAL_DEFAULT_TIMEOUT, 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: source_address=None, 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: socket_options=None, 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: ): 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: """Connect to *address* and return the socket object. 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: Convenience function. Connect to *address* (a 2-tuple ``(host, 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: port)``) and return the socket object. Passing the optional 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: *timeout* parameter will set the timeout on the socket instance 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: before attempting to connect. If no *timeout* is supplied, the 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: global default timeout setting returned by :func:`socket.getdefaulttimeout` 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: is used. If *source_address* is set it must be a tuple of (host, port) 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: for the socket to bind as a source address before making the connection. 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: An host of '' or port 0 tells the OS to use the default. 
2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: """ 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.122 INFO:teuthology.orchestra.run.vm00.stdout: host, port = address 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: if host.startswith("["): 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: host = host.strip("[]") 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: err = None 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: # Using the value from allowed_gai_family() in the context of getaddrinfo lets 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: # us select whether to work with IPv4 DNS records, IPv6 records, or both. 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: # The original create_connection function always returns all records. 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: family = allowed_gai_family() 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: try: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: host.encode("idna") 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: except UnicodeError: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: return six.raise_from( 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: LocationParseError(u"'%s', label empty or too long" % host), None 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: ) 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: af, socktype, proto, canonname, sa = res 
2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: sock = None 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: try: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: sock = socket.socket(af, socktype, proto) 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: # If provided, set socket level options before connecting. 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: _set_socket_options(sock, socket_options) 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: sock.settimeout(timeout) 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: if source_address: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: sock.bind(source_address) 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout:> sock.connect(sa) 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout:E ConnectionRefusedError: [Errno 111] Connection refused 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/util/connection.py:85: ConnectionRefusedError 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout:During handling of the above exception, another exception occurred: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout:self = 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout:request = 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: 
def send(self, request): 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: try: 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: proxy_url = self._proxy_config.proxy_url_for(request.url) 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: manager = self._get_connection_manager(request.url, proxy_url) 2026-04-01T10:15:41.123 INFO:teuthology.orchestra.run.vm00.stdout: conn = manager.connection_from_url(request.url) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: self._setup_ssl_cert(conn, request.url, self._verify) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: if ensure_boolean( 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: os.environ.get('BOTO_EXPERIMENTAL__ADD_PROXY_HOST_HEADER', '') 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: ): 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: # This is currently an "experimental" feature which provides 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: # no guarantees of backwards compatibility. It may be subject 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: # to change or removal in any patch version. Anyone opting in 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: # to this feature should strictly pin botocore. 
2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: host = urlparse(request.url).hostname 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: conn.proxy_headers['host'] = host 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: request_target = self._get_request_target(request.url, proxy_url) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:> urllib_response = conn.urlopen( 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: method=request.method, 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: url=request_target, 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: body=request.body, 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: headers=request.headers, 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: retries=Retry(False), 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: assert_same_host=False, 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: preload_content=False, 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: decode_content=False, 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: chunked=self._chunked(request.headers), 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: ) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/httpsession.py:477: 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py:802: in urlopen 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: retries = retries.increment( 
2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/util/retry.py:527: in increment 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: raise six.reraise(type(error), error, _stacktrace) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/packages/six.py:770: in reraise 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: raise value 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py:716: in urlopen 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: httplib_response = self._make_request( 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py:416: in _make_request 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: conn.request(method, url, **httplib_request_kw) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py:96: in request 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: rval = super().request(method, url, body, headers, *args, **kwargs) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connection.py:244: in request 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: super(HTTPConnection, self).request(method, url, body=body, headers=headers) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:/usr/lib64/python3.9/http/client.py:1285: in request 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: self._send_request(method, url, body, headers, encode_chunked) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:/usr/lib64/python3.9/http/client.py:1331: in _send_request 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: 
self.endheaders(body, encode_chunked=encode_chunked) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:/usr/lib64/python3.9/http/client.py:1280: in endheaders 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: self._send_output(message_body, encode_chunked=encode_chunked) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py:123: in _send_output 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: self.send(msg) 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py:223: in send 2026-04-01T10:15:41.124 INFO:teuthology.orchestra.run.vm00.stdout: return super().send(str) 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:/usr/lib64/python3.9/http/client.py:980: in send 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: self.connect() 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connection.py:205: in connect 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: conn = self._new_conn() 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:self = 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: def _new_conn(self): 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: """Establish a socket connection and set nodelay settings on it. 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: :return: New socket connection. 
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: """
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw = {}
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: if self.source_address:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw["source_address"] = self.source_address
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: if self.socket_options:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw["socket_options"] = self.socket_options
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: conn = connection.create_connection(
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: (self._dns_host, self.port), self.timeout, **extra_kw
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: except SocketTimeout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: raise ConnectTimeoutError(
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: self,
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: "Connection to %s timed out. (connect timeout=%s)"
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: % (self.host, self.timeout),
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: except SocketError as e:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:> raise NewConnectionError(
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: self, "Failed to establish a new connection: %s" % e
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connection.py:186: NewConnectionError
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:During handling of the above exception, another exception occurred:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: @pytest.mark.basic_test
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: def test_dedup_dry_large_scale():
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: #return
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: prepare_test()
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: max_copies_count=3
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: num_threads=64
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: num_files=32*1024
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: size=1*KB
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: files=[]
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: config=TransferConfig(multipart_threshold=size, multipart_chunksize=1*MB)
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: log.debug("test_dedup_dry_large_scale_new: connect to AWS ...")
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: gen_files_fixed_size(files, num_files, size, max_copies_count)
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: conns=get_connections(num_threads)
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: bucket_names=get_buckets(num_threads)
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: for i in range(num_threads):
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: conns[i].create_bucket(Bucket=bucket_names[i])
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: threads_simple_dedup_with_tenants(files, conns, bucket_names, config, True)
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: except:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: log.warning("test_dedup_dry_large_scale: failed!!")
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: finally:
2026-04-01T10:15:41.125 INFO:teuthology.orchestra.run.vm00.stdout: # cleanup must be executed even after a failure
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:> cleanup_all_buckets(bucket_names, conns)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:2751:
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:496: in cleanup_all_buckets
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: delete_bucket_with_all_objects(bucket_name, conn)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:452: in delete_bucket_with_all_objects
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: listing=conn.list_objects(Bucket=bucket_name, Marker=marker, MaxKeys=max_keys)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/client.py:602: in _api_call
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return self._make_api_call(operation_name, kwargs)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/context.py:123: in wrapper
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return func(*args, **kwargs)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/client.py:1060: in _make_api_call
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: http, parsed_response = self._make_request(
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/client.py:1084: in _make_request
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return self._endpoint.make_request(operation_model, request_dict)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:119: in make_request
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return self._send_request(request_dict, operation_model)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:200: in _send_request
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: while self._needs_retry(
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:360: in _needs_retry
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: responses = self._event_emitter.emit(
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/hooks.py:412: in emit
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return self._emitter.emit(aliased_event_name, **kwargs)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/hooks.py:256: in emit
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return self._emit(event_name, kwargs)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/hooks.py:239: in _emit
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: response = handler(**kwargs)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:207: in __call__
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: if self._checker(**checker_kwargs):
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:284: in __call__
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: should_retry = self._should_retry(
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:320: in _should_retry
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return self._checker(attempt_number, response, caught_exception)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:363: in __call__
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: checker_response = checker(
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:247: in __call__
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return self._check_caught_exception(
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:416: in _check_caught_exception
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: raise caught_exception
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:279: in _do_get_response
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: http_response = self._send(request)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:383: in _send
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: return self.http_session.send(request)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:self =
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:request =
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: def send(self, request):
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: proxy_url = self._proxy_config.proxy_url_for(request.url)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: manager = self._get_connection_manager(request.url, proxy_url)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: conn = manager.connection_from_url(request.url)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: self._setup_ssl_cert(conn, request.url, self._verify)
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: if ensure_boolean(
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: os.environ.get('BOTO_EXPERIMENTAL__ADD_PROXY_HOST_HEADER', '')
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: ):
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: # This is currently an "experimental" feature which provides
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: # no guarantees of backwards compatibility. It may be subject
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: # to change or removal in any patch version. Anyone opting in
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: # to this feature should strictly pin botocore.
2026-04-01T10:15:41.126 INFO:teuthology.orchestra.run.vm00.stdout: host = urlparse(request.url).hostname
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: conn.proxy_headers['host'] = host
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: request_target = self._get_request_target(request.url, proxy_url)
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response = conn.urlopen(
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: method=request.method,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: url=request_target,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: body=request.body,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: headers=request.headers,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: retries=Retry(False),
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: assert_same_host=False,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: preload_content=False,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: decode_content=False,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: chunked=self._chunked(request.headers),
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: http_response = botocore.awsrequest.AWSResponse(
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: request.url,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response.status,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response.headers,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response,
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: if not request.stream_output:
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: # Cause the raw stream to be exhausted immediately. We do it
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: # this way instead of using preload_content because
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: # preload_content will never buffer chunked responses
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: http_response.content
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: return http_response
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: except URLLib3SSLError as e:
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: raise SSLError(endpoint_url=request.url, error=e)
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout: except (NewConnectionError, socket.gaierror) as e:
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:> raise EndpointConnectionError(endpoint_url=request.url, error=e)
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:E botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://vm00.local:80/nwonvyzfzxtohzgt-86?marker=&max-keys=1000&encoding-type=url"
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/httpsession.py:506: EndpointConnectionError
2026-04-01T10:15:41.127 INFO:teuthology.orchestra.run.vm00.stdout:----------------------------- Captured stderr call -----------------------------
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setuser ceph since I am not root
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setgroup ceph since I am not root
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setuser ceph since I am not root
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setgroup ceph since I am not root
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:Process Process-97:
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:Traceback (most recent call last):
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/connection.py", line 174, in _new_conn
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: conn = connection.create_connection(
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/util/connection.py", line 95, in create_connection
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: raise err
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/util/connection.py", line 85, in create_connection
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: sock.connect(sa)
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:ConnectionRefusedError: [Errno 111] Connection refused
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:During handling of the above exception, another exception occurred:
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout:Traceback (most recent call last):
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/botocore/httpsession.py", line 477, in send
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response = conn.urlopen(
2026-04-01T10:15:41.330 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py", line 802, in urlopen
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: retries = retries.increment(
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/util/retry.py", line 527, in increment
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: raise six.reraise(type(error), error, _stacktrace)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/packages/six.py", line 770, in reraise
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: raise value
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py", line 716, in urlopen
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: httplib_response = self._make_request(
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py", line 416, in _make_request
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: conn.request(method, url, **httplib_request_kw)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py", line 96, in request
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: rval = super().request(method, url, body, headers, *args, **kwargs)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/connection.py", line 244, in request
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: super(HTTPConnection, self).request(method, url, body=body, headers=headers)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/usr/lib64/python3.9/http/client.py", line 1285, in request
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: self._send_request(method, url, body, headers, encode_chunked)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/usr/lib64/python3.9/http/client.py", line 1331, in _send_request
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: self.endheaders(body, encode_chunked=encode_chunked)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: self._send_output(message_body, encode_chunked=encode_chunked)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py", line 123, in _send_output
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: self.send(msg)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py", line 223, in send
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: return super().send(str)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/usr/lib64/python3.9/http/client.py", line 980, in send
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: self.connect()
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/connection.py", line 205, in connect
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: conn = self._new_conn()
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/urllib3/connection.py", line 186, in _new_conn
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: raise NewConnectionError(
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:During handling of the above exception, another exception occurred:
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:Traceback (most recent call last):
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: self.run()
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: self._target(*self._args, **self._kwargs)
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: File "/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/test_dedup.py
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:------------------------------ Captured log call -------------------------------
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:WARNING dedup.test_dedup:test_dedup.py:2748 test_dedup_dry_large_scale: failed!!
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:=============================== warnings summary ===============================
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_with_tenants
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_large_mix
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_basic_with_tenants
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart_with_tenants
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_multipart_with_tenants
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale_with_tenants
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale
2026-04-01T10:15:41.331 INFO:teuthology.orchestra.run.vm00.stdout: /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/boto3/compat.py:89: PythonDeprecationWarning: Boto3 will no longer support Python 3.9 starting April 29, 2026. To continue receiving service updates, bug fixes, and security updates please upgrade to Python 3.10 or later. More information can be found here: https://aws.amazon.com/blogs/developer/python-support-policy-updates-for-aws-sdks-and-tools/
2026-04-01T10:15:41.332 INFO:teuthology.orchestra.run.vm00.stdout: warnings.warn(warning, PythonDeprecationWarning)
2026-04-01T10:15:41.332 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:15:41.332 INFO:teuthology.orchestra.run.vm00.stdout:-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
2026-04-01T10:15:41.332 INFO:teuthology.orchestra.run.vm00.stdout:=========================== short test summary info ============================
2026-04-01T10:15:41.332 INFO:teuthology.orchestra.run.vm00.stdout:FAILED test_dedup.py::test_dedup_dry_large_scale - botocore.exceptions.Endpoi...
2026-04-01T10:15:41.332 INFO:teuthology.orchestra.run.vm00.stdout:============ 1 failed, 33 passed, 8 warnings in 1284.26s (0:21:24) =============
2026-04-01T10:15:41.495 INFO:teuthology.orchestra.run.vm00.stdout:py: exit 1 (1284.72 seconds) /home/ubuntu/cephtest/ceph/src/test/rgw/dedup> pytest -v -m 'basic_test or request_test or example_test' pid=69653
2026-04-01T10:15:41.495 INFO:teuthology.orchestra.run.vm00.stdout: py: FAIL code 1 (1287.76=setup[3.04]+cmd[1284.72] seconds)
2026-04-01T10:15:41.495 INFO:teuthology.orchestra.run.vm00.stdout: evaluation failed :( (1287.77 seconds)
2026-04-01T10:15:41.518 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:15:41.518 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 30, in nested
    vars.append(enter())
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/dedup_tests.py", line 191, in run_tests
    toxvenv_sh(ctx, remote, args, label="dedup tests against rgw")
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/dedup_tests.py", line 165, in toxvenv_sh
    return remote.sh(['source', activate, run.Raw('&&')] + args, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 97, in sh
    proc = self.run(**kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (dedup tests against rgw) on vm00 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test'"
2026-04-01T10:15:41.519 INFO:tasks.dedup_tests:Removing dedup-tests.conf file...
2026-04-01T10:15:41.519 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/deduptests.client.0.conf
2026-04-01T10:15:41.562 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph
2026-04-01T10:15:41.682 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-04-01T10:15:41.682 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T10:20:41.685 INFO:teuthology.orchestra.run.vm00.stderr:failed to fetch mon config (--no-mon-config to skip)
2026-04-01T10:20:41.686 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:20:41.687 INFO:tasks.dedup_tests:Removing dedup-tests...
2026-04-01T10:20:41.687 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/ceph
2026-04-01T10:20:42.284 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/dedup_tests.py", line 107, in create_users
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 30, in nested
    vars.append(enter())
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/dedup_tests.py", line 191, in run_tests
    toxvenv_sh(ctx, remote, args, label="dedup tests against rgw")
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/dedup_tests.py", line 165, in toxvenv_sh
    return remote.sh(['source', activate, run.Raw('&&')] + args, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 97, in sh
    proc = self.run(**kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (dedup tests against rgw) on vm00 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test'"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 112, in run_tasks
    manager.__enter__()
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/dedup_tests.py", line 240, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/dedup_tests.py", line 45, in download
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/dedup_tests.py", line 114, in create_users
    ctx.cluster.only(client).run(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/cluster.py", line 85, in run
    procs = [remote.run(**kwargs, wait=_wait) for remote in remotes]
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/cluster.py", line 85, in
    procs = [remote.run(**kwargs, wait=_wait) for remote in remotes]
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
2026-04-01T10:20:42.284 DEBUG:teuthology.run_tasks:Unwinding manager dedup-tests
2026-04-01T10:20:42.287 DEBUG:teuthology.run_tasks:Unwinding manager tox
2026-04-01T10:20:42.289 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/tox-venv
2026-04-01T10:20:42.373 DEBUG:teuthology.run_tasks:Unwinding manager tox
2026-04-01T10:20:42.375 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/tox-venv
2026-04-01T10:20:42.389 DEBUG:teuthology.run_tasks:Unwinding manager rgw
2026-04-01T10:20:42.391 DEBUG:tasks.rgw.client.0:waiting for process to exit
2026-04-01T10:20:42.391 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:20:42.391 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:20:42.391 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.0.stdout 2>&1"
2026-04-01T10:20:42.391 INFO:tasks.rgw.client.0:Stopped
2026-04-01T10:20:42.391 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock
2026-04-01T10:20:42.445 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/vault-root-token
2026-04-01T10:20:42.517 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /home/ubuntu/cephtest/url_file
2026-04-01T10:20:42.579 INFO:tasks.util.rgw:rgwadmin: client.0 : ['gc', 'process', '--include-all']
2026-04-01T10:20:42.579 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'gc', 'process', '--include-all']
2026-04-01T10:20:42.579 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all
2026-04-01T10:20:42.652 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-04-01T10:20:42.652 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-04-01T10:25:42.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-04-01T10:25:42.653+0000 7f8667b3a900 0 monclient(hunting): authenticate timed out after 300
2026-04-01T10:25:42.653 INFO:teuthology.orchestra.run.vm00.stderr:failed to fetch mon config (--no-mon-config to skip)
2026-04-01T10:25:42.654 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:25:42.654 ERROR:teuthology.run_tasks:Manager failed: rgw
Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits
ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all' 2026-04-01T10:25:42.655 DEBUG:teuthology.run_tasks:Unwinding manager openssl_keys 2026-04-01T10:25:42.657 DEBUG:teuthology.run_tasks:Unwinding manager ceph 2026-04-01T10:25:42.659 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-04-01T10:25:42.659 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-04-01T10:27:42.724 DEBUG:teuthology.orchestra.run:got remote process result: 124 2026-04-01T10:27:42.724 ERROR:teuthology.contextutil:Saw exception from nested tasks Traceback (most recent call last): File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2001, in task yield File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 552, in task with contextutil.nested(*subtasks): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 364, in create_pools yield File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File 
"/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 269, in start_rgw rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True) File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/util/rgw.py", line 34, in rgwadmin proc = remote.run( File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 32, in nested yield vars File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2011, in task ctx.managers[config['cluster']].wait_for_clean() File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean num_active_clean = self.get_num_active_clean() File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean pgs = self.get_pg_stats() File 
"/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats out = self.raw_cluster_cmd('pg', 'dump', '--format=json') File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd return self.run_cluster_cmd(**kwargs).stdout.getvalue() File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd return self.controller.run(**kwargs) File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' 2026-04-01T10:27:42.725 INFO:teuthology.misc:Shutting down mds daemons... 2026-04-01T10:27:42.725 INFO:teuthology.misc:Shutting down osd daemons... 
2026-04-01T10:27:42.725 DEBUG:tasks.ceph.osd.0:waiting for process to exit
2026-04-01T10:27:42.725 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.725 INFO:tasks.ceph.osd.0:Stopped
2026-04-01T10:27:42.725 DEBUG:tasks.ceph.osd.1:waiting for process to exit
2026-04-01T10:27:42.725 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.725 INFO:tasks.ceph.osd.1:Stopped
2026-04-01T10:27:42.725 DEBUG:tasks.ceph.osd.2:waiting for process to exit
2026-04-01T10:27:42.725 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.726 INFO:tasks.ceph.osd.2:Stopped
2026-04-01T10:27:42.726 DEBUG:tasks.ceph.osd.3:waiting for process to exit
2026-04-01T10:27:42.726 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.726 INFO:tasks.ceph.osd.3:Stopped
2026-04-01T10:27:42.726 DEBUG:tasks.ceph.osd.4:waiting for process to exit
2026-04-01T10:27:42.726 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.726 INFO:tasks.ceph.osd.4:Stopped
2026-04-01T10:27:42.726 DEBUG:tasks.ceph.osd.5:waiting for process to exit
2026-04-01T10:27:42.726 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.726 INFO:tasks.ceph.osd.5:Stopped
2026-04-01T10:27:42.726 DEBUG:tasks.ceph.osd.6:waiting for process to exit
2026-04-01T10:27:42.726 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.726 INFO:tasks.ceph.osd.6:Stopped
2026-04-01T10:27:42.726 DEBUG:tasks.ceph.osd.7:waiting for process to exit
2026-04-01T10:27:42.726 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.726 INFO:tasks.ceph.osd.7:Stopped
2026-04-01T10:27:42.726 INFO:teuthology.misc:Shutting down mgr daemons...
2026-04-01T10:27:42.726 DEBUG:tasks.ceph.mgr.y:waiting for process to exit
2026-04-01T10:27:42.726 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.726 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:27:42.726 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i y'
2026-04-01T10:27:42.726 INFO:tasks.ceph.mgr.y:Stopped
2026-04-01T10:27:42.726 DEBUG:tasks.ceph.mgr.x:waiting for process to exit
2026-04-01T10:27:42.727 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.727 INFO:tasks.ceph.mgr.x:Stopped
2026-04-01T10:27:42.727 INFO:teuthology.misc:Shutting down mon daemons...
2026-04-01T10:27:42.727 DEBUG:tasks.ceph.mon.a:waiting for process to exit
2026-04-01T10:27:42.727 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.727 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:27:42.727 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a'
2026-04-01T10:27:42.727 INFO:tasks.ceph.mon.a:Stopped
2026-04-01T10:27:42.727 DEBUG:tasks.ceph.mon.c:waiting for process to exit
2026-04-01T10:27:42.727 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.727 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:27:42.727 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i c'
2026-04-01T10:27:42.727 INFO:tasks.ceph.mon.c:Stopped
2026-04-01T10:27:42.727 DEBUG:tasks.ceph.mon.b:waiting for process to exit
2026-04-01T10:27:42.727 INFO:teuthology.orchestra.run:waiting for 300
2026-04-01T10:27:42.727 INFO:tasks.ceph.mon.b:Stopped
2026-04-01T10:27:42.727 INFO:tasks.ceph:Checking cluster log for badness...
2026-04-01T10:27:42.727 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v 'not have an application enabled' | head -n 1
2026-04-01T10:27:42.752 INFO:teuthology.orchestra.run.vm00.stdout:2026-04-01T10:04:47.974048+0000 mon.a (mon.0) 670 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)
2026-04-01T10:27:42.752 WARNING:tasks.ceph:Found errors (ERR|WRN|SEC) in cluster log
2026-04-01T10:27:42.752 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-0 on ubuntu@vm00.local
2026-04-01T10:27:42.753 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-0
2026-04-01T10:27:42.876 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-1 on ubuntu@vm00.local
2026-04-01T10:27:42.876 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-1
2026-04-01T10:27:42.959 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-2 on ubuntu@vm00.local
2026-04-01T10:27:42.959 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-2
2026-04-01T10:27:43.041 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-3 on ubuntu@vm00.local
2026-04-01T10:27:43.041 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-3
2026-04-01T10:27:43.131 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-4 on ubuntu@vm03.local
2026-04-01T10:27:43.131 DEBUG:teuthology.orchestra.run.vm03:> sync && sudo umount -f /var/lib/ceph/osd/ceph-4
2026-04-01T10:27:43.243 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-5 on ubuntu@vm03.local
2026-04-01T10:27:43.243 DEBUG:teuthology.orchestra.run.vm03:> sync && sudo umount -f /var/lib/ceph/osd/ceph-5
2026-04-01T10:27:43.334 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-6 on ubuntu@vm03.local
2026-04-01T10:27:43.334 DEBUG:teuthology.orchestra.run.vm03:> sync && sudo umount -f /var/lib/ceph/osd/ceph-6
2026-04-01T10:27:43.435 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-7 on ubuntu@vm03.local
2026-04-01T10:27:43.435 DEBUG:teuthology.orchestra.run.vm03:> sync && sudo umount -f /var/lib/ceph/osd/ceph-7
2026-04-01T10:27:43.540 INFO:tasks.ceph:Archiving mon data...
2026-04-01T10:27:43.540 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/mon/ceph-a to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/data/mon.a.tgz
2026-04-01T10:27:43.540 DEBUG:teuthology.orchestra.run.vm00:> mktemp
2026-04-01T10:27:43.558 INFO:teuthology.orchestra.run.vm00.stdout:/tmp/tmp.Fm4B9J5JKk
2026-04-01T10:27:43.558 DEBUG:teuthology.orchestra.run.vm00:> sudo tar cz -f - -C /var/lib/ceph/mon/ceph-a -- . > /tmp/tmp.Fm4B9J5JKk
2026-04-01T10:27:43.699 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0666 /tmp/tmp.Fm4B9J5JKk
2026-04-01T10:27:43.782 DEBUG:teuthology.orchestra.remote:vm00:/tmp/tmp.Fm4B9J5JKk is 504KB
2026-04-01T10:27:43.840 DEBUG:teuthology.orchestra.run.vm00:> rm -fr /tmp/tmp.Fm4B9J5JKk
2026-04-01T10:27:43.857 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/mon/ceph-c to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/data/mon.c.tgz
2026-04-01T10:27:43.857 DEBUG:teuthology.orchestra.run.vm00:> mktemp
2026-04-01T10:27:43.915 INFO:teuthology.orchestra.run.vm00.stdout:/tmp/tmp.XFrmApRMDV
2026-04-01T10:27:43.915 DEBUG:teuthology.orchestra.run.vm00:> sudo tar cz -f - -C /var/lib/ceph/mon/ceph-c -- . > /tmp/tmp.XFrmApRMDV
2026-04-01T10:27:44.060 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0666 /tmp/tmp.XFrmApRMDV
2026-04-01T10:27:44.140 DEBUG:teuthology.orchestra.remote:vm00:/tmp/tmp.XFrmApRMDV is 527KB
2026-04-01T10:27:44.200 DEBUG:teuthology.orchestra.run.vm00:> rm -fr /tmp/tmp.XFrmApRMDV
2026-04-01T10:27:44.215 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/mon/ceph-b to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/data/mon.b.tgz
2026-04-01T10:27:44.215 DEBUG:teuthology.orchestra.run.vm03:> mktemp
2026-04-01T10:27:44.230 INFO:teuthology.orchestra.run.vm03.stdout:/tmp/tmp.Kqvu86QLI1
2026-04-01T10:27:44.230 DEBUG:teuthology.orchestra.run.vm03:> sudo tar cz -f - -C /var/lib/ceph/mon/ceph-b -- . > /tmp/tmp.Kqvu86QLI1
2026-04-01T10:27:44.349 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0666 /tmp/tmp.Kqvu86QLI1
2026-04-01T10:27:44.432 DEBUG:teuthology.orchestra.remote:vm03:/tmp/tmp.Kqvu86QLI1 is 400KB
2026-04-01T10:27:44.493 DEBUG:teuthology.orchestra.run.vm03:> rm -fr /tmp/tmp.Kqvu86QLI1
2026-04-01T10:27:44.508 INFO:tasks.ceph:Cleaning ceph cluster...
2026-04-01T10:27:44.509 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /etc/ceph/ceph.conf /etc/ceph/ceph.keyring /home/ubuntu/cephtest/ceph.data /home/ubuntu/cephtest/ceph.monmap /home/ubuntu/cephtest/../*.pid
2026-04-01T10:27:44.510 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /etc/ceph/ceph.conf /etc/ceph/ceph.keyring /home/ubuntu/cephtest/ceph.data /home/ubuntu/cephtest/ceph.monmap /home/ubuntu/cephtest/../*.pid
2026-04-01T10:27:44.550 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -rf -- /etc/ceph/ceph.conf /etc/ceph/ceph.keyring /home/ubuntu/cephtest/ceph.data /home/ubuntu/cephtest/ceph.monmap /home/ubuntu/cephtest/../*.pid
2026-04-01T10:27:44.595 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-04-01T10:27:44.632 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-04-01T10:27:44.646 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-04-01T10:27:44.647 INFO:tasks.ceph:Archiving crash dumps...
2026-04-01T10:27:44.647 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/crash to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/remote/vm00/crash
2026-04-01T10:27:44.647 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/crash -- .
2026-04-01T10:27:44.672 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/crash to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/remote/vm03/crash
2026-04-01T10:27:44.672 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/crash -- .
2026-04-01T10:27:44.697 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/crash to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/remote/vm07/crash
2026-04-01T10:27:44.697 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/crash -- .
2026-04-01T10:27:44.720 INFO:tasks.ceph:Compressing logs...
2026-04-01T10:27:44.720 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-04-01T10:27:44.722 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-04-01T10:27:44.740 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-04-01T10:27:44.747 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.tmp-client.admin.55879.log
2026-04-01T10:27:44.747 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.tmp-client.admin.55879.log: 0.0% -- replaced with /var/log/ceph/ceph.tmp-client.admin.55879.log.gz
2026-04-01T10:27:44.747 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.0.log
2026-04-01T10:27:44.747 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.1.log
2026-04-01T10:27:44.748 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.2.log
2026-04-01T10:27:44.748 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.3.log
2026-04-01T10:27:44.748 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mon.a.log
2026-04-01T10:27:44.756 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.2.log: /var/log/ceph/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/ceph-mon.c.log
2026-04-01T10:27:44.763 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62281.log
2026-04-01T10:27:44.765 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.4.log
2026-04-01T10:27:44.765 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.5.log
2026-04-01T10:27:44.766 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-osd.4.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.6.log
2026-04-01T10:27:44.766 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.7.log
2026-04-01T10:27:44.766 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/ceph-mon.b.log
2026-04-01T10:27:44.767 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.log
2026-04-01T10:27:44.767 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mgr.x.log
2026-04-01T10:27:44.768 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58844.log
2026-04-01T10:27:44.769 INFO:teuthology.orchestra.run.vm03.stderr: 86.9% -- replaced with /var/log/ceph/ceph.log.gz
2026-04-01T10:27:44.770 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-mgr.x.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58892.log
2026-04-01T10:27:44.770 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/ceph.log
2026-04-01T10:27:44.771 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62281.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62281.log.gz
2026-04-01T10:27:44.781 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-mon.b.log: /var/log/ceph/ceph-osd.7.log: /var/log/ceph/ceph-client.admin.58844.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.58844.log.gz
2026-04-01T10:27:44.786 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.52011.log
2026-04-01T10:27:44.786 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.52059.log
2026-04-01T10:27:44.786 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.52107.log
2026-04-01T10:27:44.787 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.admin.52011.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.52011.log.gz
2026-04-01T10:27:44.787 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.52155.log
2026-04-01T10:27:44.787 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.admin.52059.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.52059.log.gz
2026-04-01T10:27:44.787 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.52178.log
2026-04-01T10:27:44.787 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.admin.52107.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.52107.log.gz
2026-04-01T10:27:44.787 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.admin.52155.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.2.52286.log
2026-04-01T10:27:44.787 INFO:teuthology.orchestra.run.vm07.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.52155.log.gz
2026-04-01T10:27:44.787 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.52389.log
2026-04-01T10:27:44.788 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.2.52178.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.2.52492.log
2026-04-01T10:27:44.788 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.2.52286.log: 83.0% -- replaced with /var/log/ceph/ceph-client.2.52178.log.gz
2026-04-01T10:27:44.788 INFO:teuthology.orchestra.run.vm07.stderr: 45.3% -- replaced with /var/log/ceph/ceph-client.2.52286.log.gz
2026-04-01T10:27:44.788 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.2.52389.log: 43.9% -- replaced with /var/log/ceph/ceph-client.2.52389.log.gz
2026-04-01T10:27:44.788 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.52595.log
2026-04-01T10:27:44.788 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.2.log
2026-04-01T10:27:44.789 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.2.52492.log: 44.2% -- replaced with /var/log/ceph/ceph-client.2.52492.log.gz
2026-04-01T10:27:44.789 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/ceph-client.2.52595.log: gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.2.log
2026-04-01T10:27:44.789 INFO:teuthology.orchestra.run.vm07.stderr: 44.9% -- replaced with /var/log/ceph/ceph-client.2.52595.log.gz
2026-04-01T10:27:44.789 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/rgw.ceph.client.2.log: /var/log/ceph/ops-log-ceph-client.2.log: 35.3% -- replaced with /var/log/ceph/ops-log-ceph-client.2.log.gz
2026-04-01T10:27:44.790 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62376.log
2026-04-01T10:27:44.791 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.audit.log
2026-04-01T10:27:44.792 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.58892.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.58892.log.gz
2026-04-01T10:27:44.792 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.log: 92.9% -- replaced with /var/log/ceph/ceph.log.gz
2026-04-01T10:27:44.796 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58940.log
2026-04-01T10:27:44.806 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58988.log
2026-04-01T10:27:44.807 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.58940.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.58940.log.gz
2026-04-01T10:27:44.807 INFO:teuthology.orchestra.run.vm03.stderr: 90.5% -- replaced with /var/log/ceph/ceph.audit.log.gz
2026-04-01T10:27:44.809 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mgr.y.log
2026-04-01T10:27:44.810 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62376.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62376.log.gz
2026-04-01T10:27:44.816 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.59036.log
2026-04-01T10:27:44.817 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.58988.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.58988.log.gz
2026-04-01T10:27:44.818 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.59084.log
2026-04-01T10:27:44.823 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.59036.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.59036.log.gz
2026-04-01T10:27:44.823 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.59132.log
2026-04-01T10:27:44.823 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.audit.log
2026-04-01T10:27:44.832 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.59084.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.59180.log
2026-04-01T10:27:44.832 INFO:teuthology.orchestra.run.vm03.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.59084.log.gz
2026-04-01T10:27:44.832 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62773.log
2026-04-01T10:27:44.832 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.59132.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.59132.log.gz
2026-04-01T10:27:44.833 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.59180.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.59180.log.gz
2026-04-01T10:27:44.833 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62661.log
2026-04-01T10:27:44.836 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.audit.log: 94.4% -- replaced with /var/log/ceph/ceph.audit.log.gz
2026-04-01T10:27:44.836 INFO:teuthology.orchestra.run.vm00.stderr: 94.5% -- replaced with /var/log/ceph/ceph-mgr.y.log.gz
2026-04-01T10:27:44.841 INFO:teuthology.orchestra.run.vm07.stderr: 91.3% -- replaced with /var/log/ceph/rgw.ceph.client.2.log.gz
2026-04-01T10:27:44.842 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62821.log
2026-04-01T10:27:44.843 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.62773.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62773.log.gz
2026-04-01T10:27:44.843 INFO:teuthology.orchestra.run.vm07.stderr:
2026-04-01T10:27:44.843 INFO:teuthology.orchestra.run.vm07.stderr:real 0m0.066s
2026-04-01T10:27:44.843 INFO:teuthology.orchestra.run.vm07.stderr:user 0m0.061s
2026-04-01T10:27:44.843 INFO:teuthology.orchestra.run.vm07.stderr:sys 0m0.016s
2026-04-01T10:27:44.847 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62869.log
2026-04-01T10:27:44.852 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65888.log
2026-04-01T10:27:44.853 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62661.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62661.log.gz
2026-04-01T10:27:44.857 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.62821.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62821.log.gz
2026-04-01T10:27:44.857 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62917.log
2026-04-01T10:27:44.858 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.62869.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62869.log.gz
2026-04-01T10:27:44.860 INFO:teuthology.orchestra.run.vm03.stderr: 91.9% -- replaced with /var/log/ceph/ceph-mgr.x.log.gz
2026-04-01T10:27:44.867 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66110.log
2026-04-01T10:27:44.868 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65888.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65888.log.gz
2026-04-01T10:27:44.869 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.62942.log
2026-04-01T10:27:44.870 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.admin.62917.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62917.log.gz
2026-04-01T10:27:44.872 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66182.log
2026-04-01T10:27:44.881 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.63050.log
2026-04-01T10:27:44.882 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.1.62942.log: 83.3% -- replaced with /var/log/ceph/ceph-client.1.62942.log.gz
2026-04-01T10:27:44.882 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66110.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66110.log.gz
2026-04-01T10:27:44.882 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66209.log
2026-04-01T10:27:44.883 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66182.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66182.log.gz
2026-04-01T10:27:44.886 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.63153.log
2026-04-01T10:27:44.896 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.1.63050.log: 44.9% -- replaced with /var/log/ceph/ceph-client.1.63050.log.gz
2026-04-01T10:27:44.896 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.63256.log
2026-04-01T10:27:44.896 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66300.log
2026-04-01T10:27:44.897 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66209.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66209.log.gz
2026-04-01T10:27:44.897 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.1.63153.log: 43.9% -- replaced with /var/log/ceph/ceph-client.1.63153.log.gz
2026-04-01T10:27:44.906 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66350.log
2026-04-01T10:27:44.907 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66300.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66300.log.gz
2026-04-01T10:27:44.911 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.63359.log
2026-04-01T10:27:44.911 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66400.log
2026-04-01T10:27:44.912 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.1.63256.log: 45.3% -- replaced with /var/log/ceph/ceph-client.1.63256.log.gz
2026-04-01T10:27:44.921 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66350.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66350.log.gz
2026-04-01T10:27:44.921 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.1.log
2026-04-01T10:27:44.921 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66450.log
2026-04-01T10:27:44.922 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66400.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66400.log.gz
2026-04-01T10:27:44.922 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-client.1.63359.log: 44.6% -- replaced with /var/log/ceph/ceph-client.1.63359.log.gz
2026-04-01T10:27:44.926 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.1.log
2026-04-01T10:27:44.927 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66682.log
2026-04-01T10:27:44.928 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66450.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66450.log.gz
2026-04-01T10:27:44.937 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/rgw.ceph.client.1.log: /var/log/ceph/ops-log-ceph-client.1.log: 35.3% -- replaced with /var/log/ceph/ops-log-ceph-client.1.log.gz
2026-04-01T10:27:44.937 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66694.log
2026-04-01T10:27:44.938 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66682.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66682.log.gz
2026-04-01T10:27:44.944 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66691.log
2026-04-01T10:27:44.945 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66694.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66694.log.gz
2026-04-01T10:27:44.954 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66688.log
2026-04-01T10:27:44.955 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66691.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66691.log.gz
2026-04-01T10:27:44.960 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66693.log
2026-04-01T10:27:44.961 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66688.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66688.log.gz
2026-04-01T10:27:44.970 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66677.log
2026-04-01T10:27:44.971 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66693.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66693.log.gz
2026-04-01T10:27:44.975 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66696.log
2026-04-01T10:27:44.985 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66677.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66677.log.gz
2026-04-01T10:27:44.985 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66675.log
2026-04-01T10:27:44.986 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66696.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66696.log.gz
2026-04-01T10:27:44.999 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66942.log
2026-04-01T10:27:45.000 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66675.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66675.log.gz
2026-04-01T10:27:45.009 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67041.log
2026-04-01T10:27:45.010 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66942.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66942.log.gz
2026-04-01T10:27:45.014 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67039.log
2026-04-01T10:27:45.024 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67041.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67041.log.gz
2026-04-01T10:27:45.024 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66981.log
2026-04-01T10:27:45.025 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67039.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67039.log.gz
2026-04-01T10:27:45.029 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67013.log
2026-04-01T10:27:45.039 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66981.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66981.log.gz
2026-04-01T10:27:45.040 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67032.log
2026-04-01T10:27:45.040 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67013.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67013.log.gz
2026-04-01T10:27:45.044 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67208.log
2026-04-01T10:27:45.054 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67032.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67032.log.gz
2026-04-01T10:27:45.054 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67198.log
2026-04-01T10:27:45.055 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67208.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67208.log.gz
2026-04-01T10:27:45.059 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67300.log
2026-04-01T10:27:45.069 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67198.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67198.log.gz
2026-04-01T10:27:45.069 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67348.log
2026-04-01T10:27:45.070 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67300.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67300.log.gz
2026-04-01T10:27:45.074 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67436.log
2026-04-01T10:27:45.084 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67348.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67348.log.gz
2026-04-01T10:27:45.084 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67426.log
2026-04-01T10:27:45.085 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67436.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67436.log.gz
2026-04-01T10:27:45.089 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67448.log
2026-04-01T10:27:45.093 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67426.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67426.log.gz
2026-04-01T10:27:45.099 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67511.log
2026-04-01T10:27:45.100 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67448.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67448.log.gz
2026-04-01T10:27:45.101 INFO:teuthology.orchestra.run.vm00.stderr: 92.5% -- replaced with /var/log/ceph/ceph-mon.c.log.gz
2026-04-01T10:27:45.116 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67509.log
2026-04-01T10:27:45.117 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67511.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67511.log.gz
2026-04-01T10:27:45.130 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67618.log
2026-04-01T10:27:45.131 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67509.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67509.log.gz
2026-04-01T10:27:45.144 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67702.log
2026-04-01T10:27:45.145 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67618.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67618.log.gz
2026-04-01T10:27:45.152 INFO:teuthology.orchestra.run.vm03.stderr: 92.3% -- replaced with /var/log/ceph/ceph-mon.b.log.gz
2026-04-01T10:27:45.158 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67752.log
2026-04-01T10:27:45.159 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67702.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67702.log.gz
2026-04-01T10:27:45.165 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67802.log
2026-04-01T10:27:45.166 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67752.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67752.log.gz
2026-04-01T10:27:45.172 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67852.log
2026-04-01T10:27:45.173 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67802.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67802.log.gz
2026-04-01T10:27:45.181 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67900.log
2026-04-01T10:27:45.182 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67852.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67852.log.gz
2026-04-01T10:27:45.189 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67950.log
2026-04-01T10:27:45.195 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67900.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67998.log
2026-04-01T10:27:45.196 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67950.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67950.log.gz
2026-04-01T10:27:45.198 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67900.log.gz
2026-04-01T10:27:45.205 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.68048.log
2026-04-01T10:27:45.206 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67998.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67998.log.gz
2026-04-01T10:27:45.214 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.68071.log
2026-04-01T10:27:45.215 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.68048.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.68048.log.gz
2026-04-01T10:27:45.221 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.68187.log
2026-04-01T10:27:45.228 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.68071.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.0.68290.log
2026-04-01T10:27:45.232 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.68187.log: 95.1% -- replaced with /var/log/ceph/ceph-client.0.68071.log.gz
2026-04-01T10:27:45.238 INFO:teuthology.orchestra.run.vm00.stderr: 43.9% -- replaced with /var/log/ceph/ceph-client.0.68187.log.gz
2026-04-01T10:27:45.238 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.68393.log
2026-04-01T10:27:45.239 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.68290.log: 45.3% -- replaced with /var/log/ceph/ceph-client.0.68290.log.gz
2026-04-01T10:27:45.245 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.68496.log
2026-04-01T10:27:45.252 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.68393.log: gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.0.log
2026-04-01T10:27:45.254 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.68496.log: 44.2% -- replaced with /var/log/ceph/ceph-client.0.68393.log.gz
2026-04-01T10:27:45.262 INFO:teuthology.orchestra.run.vm00.stderr: 43.5% -- replaced with /var/log/ceph/ceph-client.0.68496.log.gz
2026-04-01T10:27:45.262 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.0.log
2026-04-01T10:27:45.268 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/rgw.ceph.client.0.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.0.69479.log
2026-04-01T10:27:45.277 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ops-log-ceph-client.0.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69671.log
2026-04-01T10:27:45.278 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.69479.log: 84.4% -- replaced with /var/log/ceph/ceph-client.0.69479.log.gz
2026-04-01T10:27:45.291 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69812.log
2026-04-01T10:27:45.292 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69671.log: 83.5% -- replaced with /var/log/ceph/ceph-client.admin.69671.log.gz
2026-04-01T10:27:45.308 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69846.log
2026-04-01T10:27:45.308 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69812.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.69812.log.gz
2026-04-01T10:27:45.326 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69880.log
2026-04-01T10:27:45.326 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69846.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.69846.log.gz
2026-04-01T10:27:45.340 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69977.log
2026-04-01T10:27:45.341 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69880.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.69880.log.gz
2026-04-01T10:27:45.355 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70074.log
2026-04-01T10:27:45.356 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69977.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.69977.log.gz
2026-04-01T10:27:45.370 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70259.log
2026-04-01T10:27:45.371 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70074.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.70074.log.gz
2026-04-01T10:27:45.385 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70293.log
2026-04-01T10:27:45.386 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70259.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70259.log.gz
2026-04-01T10:27:45.399 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70327.log
2026-04-01T10:27:45.400 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70293.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70293.log.gz
2026-04-01T10:27:45.414 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70438.log
2026-04-01T10:27:45.421 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70327.log: 82.5% -- replaced with /var/log/ceph/ceph-client.admin.70327.log.gz
2026-04-01T10:27:45.431 INFO:teuthology.orchestra.run.vm03.stderr: 93.6% -- replaced with /var/log/ceph/rgw.ceph.client.1.log.gz 2026-04-01T10:27:45.436 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70536.log 2026-04-01T10:27:45.437 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70438.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.70438.log.gz 2026-04-01T10:27:45.451 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70633.log 2026-04-01T10:27:45.452 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70536.log: 93.2% -- replaced with /var/log/ceph/ceph-client.admin.70536.log.gz 2026-04-01T10:27:45.466 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70667.log 2026-04-01T10:27:45.467 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70633.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70633.log.gz 2026-04-01T10:27:45.483 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70703.log 2026-04-01T10:27:45.483 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70667.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70667.log.gz 2026-04-01T10:27:45.498 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70737.log 2026-04-01T10:27:45.498 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70703.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70703.log.gz 2026-04-01T10:27:45.511 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71751.log 2026-04-01T10:27:45.512 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70737.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70737.log.gz 2026-04-01T10:27:45.531 
INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71785.log 2026-04-01T10:27:45.531 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71751.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.71751.log.gz 2026-04-01T10:27:45.547 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71819.log 2026-04-01T10:27:45.548 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71785.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.71785.log.gz 2026-04-01T10:27:45.565 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71922.log 2026-04-01T10:27:45.574 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71819.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72020.log 2026-04-01T10:27:45.575 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71922.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.71922.log.gz 2026-04-01T10:27:45.575 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72117.log 2026-04-01T10:27:45.575 INFO:teuthology.orchestra.run.vm00.stderr: 82.7% -- replaced with /var/log/ceph/ceph-client.admin.71819.log.gz 2026-04-01T10:27:45.576 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72020.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72151.log 2026-04-01T10:27:45.576 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72117.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72117.log.gz 2026-04-01T10:27:45.594 INFO:teuthology.orchestra.run.vm00.stderr: 96.8% -- replaced with /var/log/ceph/ceph-client.admin.72020.log.gz 2026-04-01T10:27:45.594 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72186.log 2026-04-01T10:27:45.595 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72151.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72151.log.gz 2026-04-01T10:27:45.599 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72220.log 2026-04-01T10:27:45.600 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72186.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72186.log.gz 2026-04-01T10:27:45.614 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72342.log 2026-04-01T10:27:45.614 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72220.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72220.log.gz 2026-04-01T10:27:45.631 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72376.log 2026-04-01T10:27:45.631 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72342.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72342.log.gz 2026-04-01T10:27:45.631 INFO:teuthology.orchestra.run.vm00.stderr: 91.3% -- replaced with /var/log/ceph/ceph-mon.a.log.gz 2026-04-01T10:27:45.635 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72410.log 2026-04-01T10:27:45.636 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72376.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72376.log.gz 2026-04-01T10:27:45.644 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72513.log 2026-04-01T10:27:45.645 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72410.log: 83.0% -- replaced with /var/log/ceph/ceph-client.admin.72410.log.gz 2026-04-01T10:27:45.658 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72611.log 2026-04-01T10:27:45.659 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72513.log: 84.8% -- replaced with /var/log/ceph/ceph-client.admin.72513.log.gz 2026-04-01T10:27:45.664 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72708.log 2026-04-01T10:27:45.674 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72611.log: 92.8% -- replaced with /var/log/ceph/ceph-client.admin.72611.log.gz 2026-04-01T10:27:45.677 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72742.log 2026-04-01T10:27:45.678 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72708.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72708.log.gz 2026-04-01T10:27:45.689 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72778.log 2026-04-01T10:27:45.689 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72742.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72742.log.gz 2026-04-01T10:27:45.693 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72812.log 2026-04-01T10:27:45.703 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72778.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72778.log.gz 2026-04-01T10:27:45.704 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73022.log 2026-04-01T10:27:45.704 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72812.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72812.log.gz 2026-04-01T10:27:45.709 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73056.log 2026-04-01T10:27:45.710 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73022.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73022.log.gz 
2026-04-01T10:27:45.719 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73090.log 2026-04-01T10:27:45.719 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73056.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73056.log.gz 2026-04-01T10:27:45.728 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73193.log 2026-04-01T10:27:45.728 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73090.log: 83.0% -- replaced with /var/log/ceph/ceph-client.admin.73090.log.gz 2026-04-01T10:27:45.733 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73291.log 2026-04-01T10:27:45.742 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73193.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73388.log 2026-04-01T10:27:45.742 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73291.log: 84.8% -- replaced with /var/log/ceph/ceph-client.admin.73193.log.gz 2026-04-01T10:27:45.742 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73422.log 2026-04-01T10:27:45.742 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73388.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73388.log.gz 2026-04-01T10:27:45.747 INFO:teuthology.orchestra.run.vm00.stderr: 89.7% -- replaced with /var/log/ceph/ceph-client.admin.73291.log.gz 2026-04-01T10:27:45.751 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73458.log 2026-04-01T10:27:45.752 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73422.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73422.log.gz 2026-04-01T10:27:45.758 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73492.log 2026-04-01T10:27:45.759 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73458.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73458.log.gz 2026-04-01T10:27:45.766 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73604.log 2026-04-01T10:27:45.767 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73492.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73492.log.gz 2026-04-01T10:27:45.774 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73638.log 2026-04-01T10:27:45.775 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73604.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73604.log.gz 2026-04-01T10:27:45.781 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73672.log 2026-04-01T10:27:45.782 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73638.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73638.log.gz 2026-04-01T10:27:45.788 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73775.log 2026-04-01T10:27:45.796 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73672.log: 83.0% -- replaced with /var/log/ceph/ceph-client.admin.73672.log.gz 2026-04-01T10:27:45.798 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73873.log 2026-04-01T10:27:45.799 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73775.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.73775.log.gz 2026-04-01T10:27:45.811 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73970.log 2026-04-01T10:27:45.812 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73873.log: 93.3% -- replaced with /var/log/ceph/ceph-client.admin.73873.log.gz 
2026-04-01T10:27:45.816 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74004.log 2026-04-01T10:27:45.820 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73970.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73970.log.gz 2026-04-01T10:27:45.825 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74040.log 2026-04-01T10:27:45.826 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74004.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.74004.log.gz 2026-04-01T10:27:45.834 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74074.log 2026-04-01T10:27:45.835 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74040.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.74040.log.gz 2026-04-01T10:27:45.839 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74876.log 2026-04-01T10:27:45.849 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74074.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.74074.log.gz 2026-04-01T10:27:45.850 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74910.log 2026-04-01T10:27:45.851 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74876.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.74876.log.gz 2026-04-01T10:27:45.860 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74944.log 2026-04-01T10:27:45.861 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74910.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.74910.log.gz 2026-04-01T10:27:45.870 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75047.log 2026-04-01T10:27:45.871 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74944.log: 82.7% -- replaced with /var/log/ceph/ceph-client.admin.74944.log.gz 2026-04-01T10:27:45.875 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75145.log 2026-04-01T10:27:45.880 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75047.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75242.log 2026-04-01T10:27:45.883 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75145.log: 90.1% -- replaced with /var/log/ceph/ceph-client.admin.75145.log.gz 2026-04-01T10:27:45.885 INFO:teuthology.orchestra.run.vm00.stderr: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.75047.log.gz 2026-04-01T10:27:45.887 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75276.log 2026-04-01T10:27:45.888 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75242.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.75242.log.gz 2026-04-01T10:27:45.902 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75312.log 2026-04-01T10:27:45.902 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75276.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.75276.log.gz 2026-04-01T10:27:45.906 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75346.log 2026-04-01T10:27:45.907 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75312.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.75312.log.gz 2026-04-01T10:27:45.918 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75380.log 2026-04-01T10:27:45.918 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75346.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.75346.log.gz 
2026-04-01T10:27:45.923 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75477.log 2026-04-01T10:27:45.924 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75380.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.75380.log.gz 2026-04-01T10:27:45.932 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75574.log 2026-04-01T10:27:45.933 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75477.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.75477.log.gz 2026-04-01T10:27:45.937 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75971.log 2026-04-01T10:27:45.938 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75574.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.75574.log.gz 2026-04-01T10:27:45.948 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76005.log 2026-04-01T10:27:45.949 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75971.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.75971.log.gz 2026-04-01T10:27:45.954 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76039.log 2026-04-01T10:27:45.955 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76005.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.76005.log.gz 2026-04-01T10:27:45.963 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76142.log 2026-04-01T10:27:45.964 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76039.log: 83.2% -- replaced with /var/log/ceph/ceph-client.admin.76039.log.gz 2026-04-01T10:27:45.968 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76240.log 2026-04-01T10:27:45.969 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76142.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.76142.log.gz 2026-04-01T10:27:45.979 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76337.log 2026-04-01T10:27:45.981 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76240.log: 94.2% -- replaced with /var/log/ceph/ceph-client.admin.76240.log.gz 2026-04-01T10:27:45.984 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76371.log 2026-04-01T10:27:45.985 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76337.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.76337.log.gz 2026-04-01T10:27:45.995 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76406.log 2026-04-01T10:27:45.995 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76371.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.76371.log.gz 2026-04-01T10:27:46.000 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76440.log 2026-04-01T10:27:46.001 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76406.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.76406.log.gz 2026-04-01T10:27:46.014 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76474.log 2026-04-01T10:27:46.014 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76440.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.76440.log.gz 2026-04-01T10:27:46.020 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76571.log 2026-04-01T10:27:46.023 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76474.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.76474.log.gz 
2026-04-01T10:27:46.029 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76668.log 2026-04-01T10:27:46.030 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76571.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.76571.log.gz 2026-04-01T10:27:46.035 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77430.log 2026-04-01T10:27:46.036 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76668.log: 84.8% -- replaced with /var/log/ceph/ceph-client.admin.76668.log.gz 2026-04-01T10:27:46.046 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77464.log 2026-04-01T10:27:46.047 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77430.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.77430.log.gz 2026-04-01T10:27:46.051 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77498.log 2026-04-01T10:27:46.052 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77464.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.77464.log.gz 2026-04-01T10:27:46.060 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77601.log 2026-04-01T10:27:46.061 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77498.log: 82.9% -- replaced with /var/log/ceph/ceph-client.admin.77498.log.gz 2026-04-01T10:27:46.067 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77699.log 2026-04-01T10:27:46.069 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77601.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.77601.log.gz 2026-04-01T10:27:46.079 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77796.log 2026-04-01T10:27:46.083 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77699.log: 96.5% -- replaced with /var/log/ceph/ceph-client.admin.77699.log.gz 2026-04-01T10:27:46.084 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77830.log 2026-04-01T10:27:46.086 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77796.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.77796.log.gz 2026-04-01T10:27:46.097 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77866.log 2026-04-01T10:27:46.098 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77830.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.77830.log.gz 2026-04-01T10:27:46.103 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77900.log 2026-04-01T10:27:46.104 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77866.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.77866.log.gz 2026-04-01T10:27:46.113 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77934.log 2026-04-01T10:27:46.114 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77900.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.77900.log.gz 2026-04-01T10:27:46.118 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78031.log 2026-04-01T10:27:46.121 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77934.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.77934.log.gz 2026-04-01T10:27:46.131 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78128.log 2026-04-01T10:27:46.132 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78031.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.78031.log.gz 
2026-04-01T10:27:46.138 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78225.log 2026-04-01T10:27:46.142 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78128.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.78128.log.gz 2026-04-01T10:27:46.152 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78501.log 2026-04-01T10:27:46.153 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78225.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.78225.log.gz 2026-04-01T10:27:46.158 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78535.log 2026-04-01T10:27:46.158 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78501.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.78501.log.gz 2026-04-01T10:27:46.168 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78569.log 2026-04-01T10:27:46.169 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78535.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.78535.log.gz 2026-04-01T10:27:46.174 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78672.log 2026-04-01T10:27:46.177 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78569.log: 83.1% -- replaced with /var/log/ceph/ceph-client.admin.78569.log.gz 2026-04-01T10:27:46.181 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78770.log 2026-04-01T10:27:46.187 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78672.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.78672.log.gz 2026-04-01T10:27:46.189 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78867.log 2026-04-01T10:27:46.190 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78770.log: 89.3% -- replaced with /var/log/ceph/ceph-client.admin.78770.log.gz 2026-04-01T10:27:46.195 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78901.log 2026-04-01T10:27:46.196 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78867.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.78867.log.gz 2026-04-01T10:27:46.202 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78937.log 2026-04-01T10:27:46.202 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78901.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.78901.log.gz 2026-04-01T10:27:46.214 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78971.log 2026-04-01T10:27:46.215 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78937.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.78937.log.gz 2026-04-01T10:27:46.219 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79005.log 2026-04-01T10:27:46.220 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78971.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.78971.log.gz 2026-04-01T10:27:46.230 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79102.log 2026-04-01T10:27:46.231 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79005.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.79005.log.gz 2026-04-01T10:27:46.235 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79199.log 2026-04-01T10:27:46.236 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79102.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.79102.log.gz 
2026-04-01T10:27:46.244 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79296.log 2026-04-01T10:27:46.245 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79199.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.79199.log.gz 2026-04-01T10:27:46.249 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79393.log 2026-04-01T10:27:46.251 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79296.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.79296.log.gz 2026-04-01T10:27:46.261 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79490.log 2026-04-01T10:27:46.262 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79393.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.79393.log.gz 2026-04-01T10:27:46.266 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79587.log 2026-04-01T10:27:46.267 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79490.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.79490.log.gz 2026-04-01T10:27:46.277 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79684.log 2026-04-01T10:27:46.278 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79587.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.79587.log.gz 2026-04-01T10:27:46.282 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79781.log 2026-04-01T10:27:46.285 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79684.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.79684.log.gz 2026-04-01T10:27:46.289 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79878.log 2026-04-01T10:27:46.296 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79781.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79975.log 2026-04-01T10:27:46.297 INFO:teuthology.orchestra.run.vm00.stderr: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.79781.log.gz 2026-04-01T10:27:46.297 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79878.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.79878.log.gz 2026-04-01T10:27:46.301 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80072.log 2026-04-01T10:27:46.303 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79975.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.79975.log.gz 2026-04-01T10:27:46.313 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80169.log 2026-04-01T10:27:46.314 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80072.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.80072.log.gz 2026-04-01T10:27:46.318 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80266.log 2026-04-01T10:27:46.319 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80169.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.80169.log.gz 2026-04-01T10:27:46.330 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80363.log 2026-04-01T10:27:46.331 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80266.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.80266.log.gz 2026-04-01T10:27:46.336 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80460.log 2026-04-01T10:27:46.337 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80363.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.80363.log.gz 
2026-04-01T10:27:46.350 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80557.log 2026-04-01T10:27:46.351 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80460.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.80460.log.gz 2026-04-01T10:27:46.355 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80654.log 2026-04-01T10:27:46.356 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80557.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.80557.log.gz 2026-04-01T10:27:46.367 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80751.log 2026-04-01T10:27:46.368 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80654.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.80654.log.gz 2026-04-01T10:27:46.375 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80848.log 2026-04-01T10:27:46.377 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80751.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.80751.log.gz 2026-04-01T10:27:46.387 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.80945.log 2026-04-01T10:27:46.388 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80848.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.80848.log.gz 2026-04-01T10:27:46.400 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81042.log 2026-04-01T10:27:46.401 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.80945.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.80945.log.gz 2026-04-01T10:27:46.406 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81139.log 2026-04-01T10:27:46.408 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81042.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.81042.log.gz 2026-04-01T10:27:46.414 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81236.log 2026-04-01T10:27:46.420 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81139.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81333.log 2026-04-01T10:27:46.421 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81236.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.81139.log.gz 2026-04-01T10:27:46.421 INFO:teuthology.orchestra.run.vm00.stderr: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.81236.log.gz 2026-04-01T10:27:46.438 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81430.log 2026-04-01T10:27:46.439 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81333.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.81333.log.gz 2026-04-01T10:27:46.444 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81527.log 2026-04-01T10:27:46.446 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81430.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.81430.log.gz 2026-04-01T10:27:46.455 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81624.log 2026-04-01T10:27:46.456 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81527.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.81527.log.gz 2026-04-01T10:27:46.460 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81721.log 2026-04-01T10:27:46.463 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81624.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.81624.log.gz 
2026-04-01T10:27:46.471 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81818.log 2026-04-01T10:27:46.472 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81721.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.81721.log.gz 2026-04-01T10:27:46.477 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.81915.log 2026-04-01T10:27:46.480 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81818.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.81818.log.gz 2026-04-01T10:27:46.485 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82012.log 2026-04-01T10:27:46.491 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.81915.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82109.log 2026-04-01T10:27:46.492 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82012.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.81915.log.gz 2026-04-01T10:27:46.492 INFO:teuthology.orchestra.run.vm00.stderr: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.82012.log.gz 2026-04-01T10:27:46.511 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82206.log 2026-04-01T10:27:46.517 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82109.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82303.log 2026-04-01T10:27:46.518 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82206.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.82109.log.gz 2026-04-01T10:27:46.518 INFO:teuthology.orchestra.run.vm00.stderr: 85.4% -- replaced with /var/log/ceph/ceph-client.admin.82206.log.gz 2026-04-01T10:27:46.533 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82400.log 2026-04-01T10:27:46.534 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82303.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.82303.log.gz 2026-04-01T10:27:46.538 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82497.log 2026-04-01T10:27:46.542 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82400.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.82400.log.gz 2026-04-01T10:27:46.548 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82594.log 2026-04-01T10:27:46.549 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82497.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.82497.log.gz 2026-04-01T10:27:46.553 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82691.log 2026-04-01T10:27:46.558 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82594.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.82594.log.gz 2026-04-01T10:27:46.564 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82788.log 2026-04-01T10:27:46.565 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82691.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.82691.log.gz 2026-04-01T10:27:46.569 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82885.log 2026-04-01T10:27:46.572 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82788.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.82788.log.gz 2026-04-01T10:27:46.580 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.82982.log 2026-04-01T10:27:46.581 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82885.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.82885.log.gz 
2026-04-01T10:27:46.585 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83079.log 2026-04-01T10:27:46.588 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.82982.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.82982.log.gz 2026-04-01T10:27:46.596 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83176.log 2026-04-01T10:27:46.597 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83079.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.83079.log.gz 2026-04-01T10:27:46.601 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83273.log 2026-04-01T10:27:46.605 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83176.log: 85.3% -- replaced with /var/log/ceph/ceph-client.admin.83176.log.gz 2026-04-01T10:27:46.611 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83370.log 2026-04-01T10:27:46.612 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83273.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.83273.log.gz 2026-04-01T10:27:46.618 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83467.log 2026-04-01T10:27:46.620 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83370.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.83370.log.gz 2026-04-01T10:27:46.626 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83564.log 2026-04-01T10:27:46.632 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83467.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83661.log 2026-04-01T10:27:46.633 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83564.log: 85.1% -- replaced with 
/var/log/ceph/ceph-client.admin.83467.log.gz 2026-04-01T10:27:46.633 INFO:teuthology.orchestra.run.vm00.stderr: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.83564.log.gz 2026-04-01T10:27:46.648 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83758.log 2026-04-01T10:27:46.649 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83661.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.83661.log.gz 2026-04-01T10:27:46.654 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83855.log 2026-04-01T10:27:46.656 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83758.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.83758.log.gz 2026-04-01T10:27:46.663 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.83952.log 2026-04-01T10:27:46.664 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83855.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.83855.log.gz 2026-04-01T10:27:46.668 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84049.log 2026-04-01T10:27:46.673 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.83952.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.83952.log.gz 2026-04-01T10:27:46.678 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84146.log 2026-04-01T10:27:46.679 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84049.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.84049.log.gz 2026-04-01T10:27:46.685 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84243.log 2026-04-01T10:27:46.688 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84146.log: 84.9% -- replaced with 
/var/log/ceph/ceph-client.admin.84146.log.gz 2026-04-01T10:27:46.696 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84340.log 2026-04-01T10:27:46.697 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84243.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.84243.log.gz 2026-04-01T10:27:46.702 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84437.log 2026-04-01T10:27:46.705 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84340.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.84340.log.gz 2026-04-01T10:27:46.710 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84534.log 2026-04-01T10:27:46.717 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84437.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.84437.log.gz 2026-04-01T10:27:46.717 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84631.log 2026-04-01T10:27:46.718 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84534.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.84534.log.gz 2026-04-01T10:27:46.722 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84728.log 2026-04-01T10:27:46.728 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84631.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.84825.log 2026-04-01T10:27:46.729 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84728.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.84728.log.gz 2026-04-01T10:27:46.732 INFO:teuthology.orchestra.run.vm00.stderr: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.84631.log.gz 2026-04-01T10:27:46.747 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.84922.log 2026-04-01T10:27:46.748 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84825.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.84825.log.gz 2026-04-01T10:27:46.753 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.85019.log 2026-04-01T10:27:46.756 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.84922.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.84922.log.gz 2026-04-01T10:27:46.766 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.85116.log 2026-04-01T10:27:46.767 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.85019.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.85019.log.gz 2026-04-01T10:27:46.771 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325169.log 2026-04-01T10:27:46.776 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.85116.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.85116.log.gz 2026-04-01T10:27:46.781 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325203.log 2026-04-01T10:27:46.782 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325169.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.325169.log.gz 2026-04-01T10:27:46.786 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325237.log 2026-04-01T10:27:46.789 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325203.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.325203.log.gz 2026-04-01T10:27:46.795 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325340.log 2026-04-01T10:27:46.796 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325237.log: 83.0% 
-- replaced with /var/log/ceph/ceph-client.admin.325237.log.gz 2026-04-01T10:27:46.801 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325440.log 2026-04-01T10:27:46.803 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325340.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.325340.log.gz 2026-04-01T10:27:46.809 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325537.log 2026-04-01T10:27:46.815 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325440.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325571.log 2026-04-01T10:27:46.816 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325537.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.325537.log.gz 2026-04-01T10:27:46.835 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325607.log 2026-04-01T10:27:46.836 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325571.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.325571.log.gz 2026-04-01T10:27:46.849 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.325641.log 2026-04-01T10:27:46.850 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325607.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.325607.log.gz 2026-04-01T10:27:46.864 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.528356.log 2026-04-01T10:27:46.865 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.325641.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.325641.log.gz 2026-04-01T10:27:46.877 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.528553.log 2026-04-01T10:27:46.878 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.528356.log: 8.2% -- replaced with /var/log/ceph/ceph-client.0.528356.log.gz
2026-04-01T10:27:46.891 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.528616.log
2026-04-01T10:27:46.892 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.528553.log: 8.2% -- replaced with /var/log/ceph/ceph-client.0.528553.log.gz
2026-04-01T10:27:46.905 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.528616.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.528616.log.gz
2026-04-01T10:27:47.240 INFO:teuthology.orchestra.run.vm00.stderr: 86.8% -- replaced with /var/log/ceph/ceph-client.admin.325440.log.gz
2026-04-01T10:27:47.378 INFO:teuthology.orchestra.run.vm00.stderr: 92.3% -- replaced with /var/log/ceph/ops-log-ceph-client.0.log.gz
2026-04-01T10:27:48.414 INFO:teuthology.orchestra.run.vm03.stderr:
2026-04-01T10:27:48.414 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /var/log/ceph/ceph-osd.7.log.gz: No space left on device
2026-04-01T10:27:48.417 INFO:teuthology.orchestra.run.vm03.stderr:
2026-04-01T10:27:48.417 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /var/log/ceph/ceph-osd.6.log.gz: No space left on device
2026-04-01T10:27:51.508 INFO:teuthology.orchestra.run.vm03.stderr:
2026-04-01T10:27:51.509 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /var/log/ceph/ceph-osd.5.log.gz: No space left on device
2026-04-01T10:27:51.510 INFO:teuthology.orchestra.run.vm03.stderr:
2026-04-01T10:27:51.510 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /var/log/ceph/ceph-osd.4.log.gz: No space left on device
2026-04-01T10:27:51.516 INFO:teuthology.orchestra.run.vm03.stderr:
2026-04-01T10:27:51.516 INFO:teuthology.orchestra.run.vm03.stderr:real 0m6.760s
2026-04-01T10:27:51.516 INFO:teuthology.orchestra.run.vm03.stderr:user 0m18.976s
2026-04-01T10:27:51.516 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m1.057s
2026-04-01T10:28:15.201 INFO:teuthology.orchestra.run.vm00.stderr:
2026-04-01T10:28:15.201 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/ceph-osd.3.log.gz: No space left on device
2026-04-01T10:28:15.204 INFO:teuthology.orchestra.run.vm00.stderr:
2026-04-01T10:28:15.204 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/rgw.ceph.client.0.log.gz: No space left on device
2026-04-01T10:28:26.189 INFO:teuthology.orchestra.run.vm00.stderr: 93.3% -- replaced with /var/log/ceph/ceph-osd.2.log.gz
2026-04-01T10:28:41.305 INFO:teuthology.orchestra.run.vm00.stderr: 93.3% -- replaced with /var/log/ceph/ceph-osd.1.log.gz
2026-04-01T10:28:41.553 INFO:teuthology.orchestra.run.vm00.stderr: 93.3% -- replaced with /var/log/ceph/ceph-osd.0.log.gz
2026-04-01T10:28:41.554 INFO:teuthology.orchestra.run.vm00.stderr:
2026-04-01T10:28:41.554 INFO:teuthology.orchestra.run.vm00.stderr:real 0m56.817s
2026-04-01T10:28:41.554 INFO:teuthology.orchestra.run.vm00.stderr:user 2m49.737s
2026-04-01T10:28:41.554 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m10.828s
2026-04-01T10:28:41.554 DEBUG:teuthology.orchestra.run:got remote process result: 123
2026-04-01T10:28:41.554 ERROR:teuthology.run_tasks:Manager failed: ceph
Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 252, in ceph_log
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 1996, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/ceph.py", line 263, in ceph_log
    run.wait(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --"
2026-04-01T10:28:41.555 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-04-01T10:28:41.557 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/kshtsk/teuthology/teuthology/task/install/__init__.py", line 644, in task
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'
2026-04-01T10:28:41.558 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-04-01T10:28:41.558 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-04-01T10:28:41.596 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-04-01T10:28:41.598 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-04-01T10:28:41.632 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-04-01T10:28:41.633 DEBUG:teuthology.orchestra.run.vm00:> 2026-04-01T10:28:41.633 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-04-01T10:28:41.633 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y remove $d || true 2026-04-01T10:28:41.633 DEBUG:teuthology.orchestra.run.vm00:> done 2026-04-01T10:28:41.639 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 
2026-04-01T10:28:41.639 DEBUG:teuthology.orchestra.run.vm03:>
2026-04-01T10:28:41.639 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-04-01T10:28:41.639 DEBUG:teuthology.orchestra.run.vm03:> sudo yum -y remove $d || true
2026-04-01T10:28:41.640 DEBUG:teuthology.orchestra.run.vm03:> done
2026-04-01T10:28:41.644 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-04-01T10:28:41.644 DEBUG:teuthology.orchestra.run.vm07:>
2026-04-01T10:28:41.644 DEBUG:teuthology.orchestra.run.vm07:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-04-01T10:28:41.644 DEBUG:teuthology.orchestra.run.vm07:> sudo yum -y remove $d || true
2026-04-01T10:28:41.644 DEBUG:teuthology.orchestra.run.vm07:> done
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repo Size
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 103 M
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout: mailcap noarch 2.1.49-5.el9.0.2 @baseos 78 k
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:Remove 2 Packages
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 103 M
2026-04-01T10:28:41.840 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-04-01T10:28:41.842 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-04-01T10:28:41.842 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-04-01T10:28:41.853 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-04-01T10:28:41.853 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-04-01T10:28:41.882 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-04-01T10:28:41.907 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:41.907 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T10:28:41.907 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-04-01T10:28:41.908 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-04-01T10:28:41.908 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-04-01T10:28:41.908 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:41.915 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:41.923 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:41.925 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 103 M
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout: mailcap noarch 2.1.49-5.el9.0.2 @baseos 78 k
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 103 M
2026-04-01T10:28:41.926 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-04-01T10:28:41.930 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-04-01T10:28:41.930 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-04-01T10:28:41.938 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:41.943 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-04-01T10:28:41.943 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-04-01T10:28:41.975 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-04-01T10:28:41.997 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:41.997 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T10:28:41.997 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-04-01T10:28:41.997 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-04-01T10:28:41.997 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-04-01T10:28:41.997 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:42.000 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:42.000 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:42.002 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:42.012 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:42.012 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repo Size
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 103 M
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout: mailcap noarch 2.1.49-5.el9.0.2 @baseos 78 k
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:Remove 2 Packages
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 103 M
2026-04-01T10:28:42.013 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-04-01T10:28:42.018 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-04-01T10:28:42.018 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-04-01T10:28:42.028 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:42.044 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-04-01T10:28:42.044 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-04-01T10:28:42.078 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:42.078 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:42.078 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-04-01T10:28:42.078 INFO:teuthology.orchestra.run.vm07.stdout: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:42.078 INFO:teuthology.orchestra.run.vm07.stdout: mailcap-2.1.49-5.el9.0.2.noarch
2026-04-01T10:28:42.078 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:42.078 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:42.078 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-04-01T10:28:42.090 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:42.090 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:42.111 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:42.111 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T10:28:42.111 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-04-01T10:28:42.111 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-04-01T10:28:42.111 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-04-01T10:28:42.111 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:42.117 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:42.128 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:42.144 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:42.176 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:42.176 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:42.176 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-04-01T10:28:42.176 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:42.176 INFO:teuthology.orchestra.run.vm03.stdout: mailcap-2.1.49-5.el9.0.2.noarch
2026-04-01T10:28:42.176 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:42.176 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 365 M
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:Remove 3 Packages
2026-04-01T10:28:42.280 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:42.281 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 366 M
2026-04-01T10:28:42.281 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-04-01T10:28:42.283 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-04-01T10:28:42.283 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-04-01T10:28:42.302 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-04-01T10:28:42.302 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-04-01T10:28:42.381 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 365 M
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:Remove 3 Packages
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 366 M
2026-04-01T10:28:42.382 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-04-01T10:28:42.385 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-04-01T10:28:42.385 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-04-01T10:28:42.411 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-04-01T10:28:42.412 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-04-01T10:28:42.472 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-04-01T10:28:42.544 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-04-01T10:28:42.546 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/3
2026-04-01T10:28:42.558 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 2/3
2026-04-01T10:28:42.583 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:42.590 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/3
2026-04-01T10:28:42.614 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 2/3
2026-04-01T10:28:42.646 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:42.698 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:42.698 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/3
2026-04-01T10:28:42.698 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 2/3
2026-04-01T10:28:42.768 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:42.768 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/3
2026-04-01T10:28:42.768 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 2/3
2026-04-01T10:28:42.818 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:42.818 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:42.818 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-04-01T10:28:42.818 INFO:teuthology.orchestra.run.vm07.stdout: ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 socat-1.7.4.1-8.el9.x86_64
2026-04-01T10:28:42.818 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-04-01T10:28:42.818 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:42.818 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:42.883 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:42.883 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:42.883 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-04-01T10:28:42.883 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 socat-1.7.4.1-8.el9.x86_64
2026-04-01T10:28:42.883 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-04-01T10:28:42.883 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:42.883 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:43.055 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:43.055 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 1/2
2026-04-01T10:28:43.068 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: ceph x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 0
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 6.8 M
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 19 M
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:Remove 8 Packages
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 28 M
2026-04-01T10:28:43.069 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-04-01T10:28:43.072 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-04-01T10:28:43.073 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-04-01T10:28:43.093 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-04-01T10:28:43.093 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-04-01T10:28:43.133 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-04-01T10:28:43.140 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/8
2026-04-01T10:28:43.144 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-04-01T10:28:43.146 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-04-01T10:28:43.151 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-04-01T10:28:43.153 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-04-01T10:28:43.155 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-04-01T10:28:43.161 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:43.162 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: ceph x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 0
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 6.8 M
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 19 M
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:Remove 8 Packages
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 28 M
2026-04-01T10:28:43.163 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.0.2.noarch 2/2
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm00.stdout: mailcap-2.1.49-5.el9.0.2.noarch
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-04-01T10:28:43.166 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-04-01T10:28:43.178 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8
2026-04-01T10:28:43.178 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T10:28:43.178 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-04-01T10:28:43.178 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-04-01T10:28:43.178 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-04-01T10:28:43.178 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:43.179 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8
2026-04-01T10:28:43.188 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8
2026-04-01T10:28:43.193 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-04-01T10:28:43.193 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-04-01T10:28:43.209 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8
2026-04-01T10:28:43.209 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T10:28:43.209 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-04-01T10:28:43.209 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-04-01T10:28:43.209 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-04-01T10:28:43.209 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:43.211 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8
2026-04-01T10:28:43.235 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-04-01T10:28:43.242 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/8
2026-04-01T10:28:43.246 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-04-01T10:28:43.248 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-04-01T10:28:43.251 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-04-01T10:28:43.254 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-04-01T10:28:43.257 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-04-01T10:28:43.278 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8
2026-04-01T10:28:43.278 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T10:28:43.278 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-04-01T10:28:43.278 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-04-01T10:28:43.278 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-04-01T10:28:43.278 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:43.278 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8
2026-04-01T10:28:43.287 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8
2026-04-01T10:28:43.306 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8
2026-04-01T10:28:43.306 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/8
2026-04-01T10:28:43.306 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2/8
2026-04-01T10:28:43.306 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 3/8
2026-04-01T10:28:43.306 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-04-01T10:28:43.306 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-04-01T10:28:43.306 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-04-01T10:28:43.306 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-04-01T10:28:43.353 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout: ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout: lua-5.4.4-4.el9.x86_64
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout: luarocks-3.9.2-5.el9.noarch
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout: unzip-6.0-59.el9.x86_64
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout: zip-3.0-35.el9.x86_64
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:43.354 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:43.366 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8
2026-04-01T10:28:43.366 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-04-01T10:28:43.366 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-04-01T10:28:43.366 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-04-01T10:28:43.366 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-04-01T10:28:43.366 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:43.367 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8
2026-04-01T10:28:43.387 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 365 M
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:Remove 3 Packages
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 366 M
2026-04-01T10:28:43.388 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-04-01T10:28:43.391 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-04-01T10:28:43.391 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-04-01T10:28:43.420 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-04-01T10:28:43.421 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-04-01T10:28:43.460 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8
2026-04-01T10:28:43.460 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/8
2026-04-01T10:28:43.460 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2/8
2026-04-01T10:28:43.460 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 3/8
2026-04-01T10:28:43.460 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-04-01T10:28:43.460 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-04-01T10:28:43.460 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-04-01T10:28:43.460 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-04-01T10:28:43.482 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-04-01T10:28:43.490 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/3
2026-04-01T10:28:43.493 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 2/3
2026-04-01T10:28:43.511 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:43.514 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-04-01T10:28:43.514 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout: ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout: lua-5.4.4-4.el9.x86_64
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout: luarocks-3.9.2-5.el9.noarch
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout: unzip-6.0-59.el9.x86_64
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout: zip-3.0-35.el9.x86_64
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:43.515 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:43.574 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout:===================================================================================================================
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout:===================================================================================================================
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 24 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages:
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 447 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 2.9 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 938 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 148 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 66 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 567 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 54 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-volume noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.4 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 11 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: c-ares x86_64 1.19.1-2.el9_4 @baseos 279 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 98 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 990 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-k8sevents noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 60 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.6 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 57 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 138 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: cryptsetup x86_64 2.7.2-4.el9 @baseos 722 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas x86_64 3.0.4-8.el9.0.1 @appstream 68 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib x86_64 3.0.4-8.el9.0.1 @appstream 11 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp x86_64 3.0.4-8.el9.0.1 @appstream 39 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 409 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran x86_64 11.5.0-11.el9 @baseos 2.8 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath x86_64 11.5.0-11.el9 @baseos 330 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 792 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: libxslt x86_64 1.1.34-13.el9_6 @appstream 751 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: nvme-cli x86_64 2.13-1.el9 @baseos 6.8 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: protobuf x86_64 3.14.0-17.el9_7 @appstream 3.5 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-compiler x86_64 3.14.0-17.el9_7 @crb 2.9 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-04-01T10:28:43.581 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 816 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-chardet noarch 4.0.0-5.el9 @77d52b2cce1347aa9f3fc60d8b93d222 1.4 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot noarch 10.0.1-5.el9 @epel 682 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy noarch 18.10.0-5.el9 @epel 1.0 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography x86_64 36.0.1-5.el9_6 @baseos 4.5 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel x86_64 3.9.23-2.el9 @appstream 765 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-idna noarch 2.10-7.el9_4.1 @77d52b2cce1347aa9f3fc60d8b93d222 513 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-influxdb noarch 5.3.1-1.el9 @epel 747 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-isodate noarch 0.6.1-3.el9 @epel 203 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2 noarch 2.11.3-8.el9_5 @appstream 1.1 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpatch noarch 1.21-16.el9 @0d57cd3fe20446e8b1c08da162742194 55 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpointer noarch 2.0-4.el9.0.1 @0d57cd3fe20446e8b1c08da162742194 34 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-lxml x86_64 4.6.5-3.el9 @appstream 4.2 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-msgpack x86_64 1.0.3-2.el9 @epel 264 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy x86_64 1:1.23.5-2.el9_7 @appstream 30 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9_7 @appstream 1.7 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-oauthlib noarch 3.1.1-5.el9 @0d57cd3fe20446e8b1c08da162742194 888 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply noarch 3.11-14.el9.0.1 @baseos 430 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable noarch 0.7.2-27.el9.0.1 @0d57cd3fe20446e8b1c08da162742194 166 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-protobuf noarch 3.14.0-17.el9_7 @appstream 1.4 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1 noarch 0.4.8-7.el9_7 @appstream 622 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9_7 @appstream 1.0 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyparsing noarch 2.4.7-9.el9.0.1 @baseos 635 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-pysocks noarch 1.7.1-12.el9.0.1 @77d52b2cce1347aa9f3fc60d8b93d222 88 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-pytz noarch 2021.1-5.el9 @0d57cd3fe20446e8b1c08da162742194 176 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests noarch 2.25.1-10.el9_6 @baseos 405 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-saml noarch 1.16.0-1.el9 @epel 730 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 72 M
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml noarch 0.10.2-6.el9.0.1 @appstream 99 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3 noarch 1.26.5-6.el9_7.1 @baseos 746 k
2026-04-01T10:28:43.582 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmlsec x86_64 1.3.13-1.el9 @epel 158 k
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout: qatlib x86_64 24.09.0-1.el9 @appstream 588 k
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-service x86_64 24.09.0-1.el9 @appstream 64 k
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools x86_64 1:7.2-9.el9 @baseos 1.9 M
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout: xmlsec1 x86_64 1.2.29-13.el9 @appstream 596 k
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout: xmlsec1-openssl x86_64 1.2.29-13.el9 @appstream 281 k
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout:===================================================================================================================
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout:Remove 111 Packages
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 687 M
2026-04-01T10:28:43.583 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-04-01T10:28:43.587 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:43.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/3
2026-04-01T10:28:43.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 2/3
2026-04-01T10:28:43.607 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-04-01T10:28:43.608 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-04-01T10:28:43.633 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 3/3
2026-04-01T10:28:43.633 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:43.633 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-04-01T10:28:43.633 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 socat-1.7.4.1-8.el9.x86_64
2026-04-01T10:28:43.633 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-04-01T10:28:43.633 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:43.633 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:43.726 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-04-01T10:28:43.727 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-04-01T10:28:43.740 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout:===================================================================================================================
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout:===================================================================================================================
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 24 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 447 k
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 2.9 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 938 k
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 148 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 66 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 567 k
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 54 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.4 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 11 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: c-ares x86_64 1.19.1-2.el9_4 @baseos 279 k
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 98 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 990 k
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 60 k
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.6 M
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 57 k
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 138 k
2026-04-01T10:28:43.747 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup x86_64 2.7.2-4.el9 @baseos 722 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas x86_64 3.0.4-8.el9.0.1 @appstream 68 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib x86_64 3.0.4-8.el9.0.1 @appstream 11 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp x86_64 3.0.4-8.el9.0.1 @appstream 39 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 409 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran x86_64 11.5.0-11.el9 @baseos 2.8 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath x86_64 11.5.0-11.el9 @baseos 330 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 792 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: libxslt x86_64 1.1.34-13.el9_6 @appstream 751 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli x86_64 2.13-1.el9 @baseos 6.8 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: protobuf x86_64 3.14.0-17.el9_7 @appstream 3.5 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler x86_64 3.14.0-17.el9_7 @crb 2.9 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 816 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-chardet noarch 4.0.0-5.el9 @77d52b2cce1347aa9f3fc60d8b93d222 1.4 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot noarch 10.0.1-5.el9 @epel 682 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy noarch 18.10.0-5.el9 @epel 1.0 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography x86_64 36.0.1-5.el9_6 @baseos 4.5 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel x86_64 3.9.23-2.el9 @appstream 765 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-idna noarch 2.10-7.el9_4.1 @77d52b2cce1347aa9f3fc60d8b93d222 513 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb noarch 5.3.1-1.el9 @epel 747 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-isodate noarch 0.6.1-3.el9 @epel 203 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2 noarch 2.11.3-8.el9_5 @appstream 1.1 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpatch noarch 1.21-16.el9 @0d57cd3fe20446e8b1c08da162742194 55 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpointer noarch 2.0-4.el9.0.1 @0d57cd3fe20446e8b1c08da162742194 34 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-lxml x86_64 4.6.5-3.el9 @appstream 4.2 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-msgpack x86_64 1.0.3-2.el9 @epel 264 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy x86_64 1:1.23.5-2.el9_7 @appstream 30 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9_7 @appstream 1.7 M
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-oauthlib noarch 3.1.1-5.el9 @0d57cd3fe20446e8b1c08da162742194 888 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply noarch 3.11-14.el9.0.1 @baseos 430 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-04-01T10:28:43.748 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable noarch 0.7.2-27.el9.0.1 @0d57cd3fe20446e8b1c08da162742194 166 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf noarch 3.14.0-17.el9_7 @appstream 1.4 M
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1 noarch 0.4.8-7.el9_7 @appstream 622 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9_7 @appstream 1.0 M
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing noarch 2.4.7-9.el9.0.1 @baseos 635 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-pysocks noarch 1.7.1-12.el9.0.1 @77d52b2cce1347aa9f3fc60d8b93d222 88 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytz noarch 2021.1-5.el9 @0d57cd3fe20446e8b1c08da162742194 176 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests noarch 2.25.1-10.el9_6 @baseos 405 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-saml noarch 1.16.0-1.el9 @epel 730 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 72 M
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml noarch 0.10.2-6.el9.0.1 @appstream 99 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3 noarch 1.26.5-6.el9_7.1 @baseos 746 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmlsec x86_64 1.3.13-1.el9 @epel 158 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: qatlib x86_64 24.09.0-1.el9 @appstream 588 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service x86_64 24.09.0-1.el9 @appstream 64 k
2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools x86_64 1:7.2-9.el9 @baseos 1.9 M 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1 x86_64 1.2.29-13.el9 @appstream 596 k 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-openssl x86_64 1.2.29-13.el9 @appstream 281 k 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout:=================================================================================================================== 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout:Remove 111 Packages 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 687 M 2026-04-01T10:28:43.749 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-04-01T10:28:43.774 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-04-01T10:28:43.774 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-04-01T10:28:43.831 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 0 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies: 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 6.8 M 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 19 M 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: zip x86_64 3.0-35.el9 @baseos 724 k 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout:Remove 8 Packages 2026-04-01T10:28:43.832 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:43.833 INFO:teuthology.orchestra.run.vm00.stdout:Freed 
space: 28 M 2026-04-01T10:28:43.833 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-04-01T10:28:43.835 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-04-01T10:28:43.835 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-04-01T10:28:43.868 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 2026-04-01T10:28:43.868 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-04-01T10:28:43.883 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-04-01T10:28:43.883 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 1/111 2026-04-01T10:28:43.891 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 1/111 2026-04-01T10:28:43.897 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 2026-04-01T10:28:43.898 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-04-01T10:28:43.905 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-04-01T10:28:43.911 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/8 2026-04-01T10:28:43.913 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:43.913 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:43.913 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-04-01T10:28:43.913 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-04-01T10:28:43.913 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-04-01T10:28:43.913 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:43.913 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:43.914 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8 2026-04-01T10:28:43.916 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8 2026-04-01T10:28:43.919 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8 2026-04-01T10:28:43.922 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8 2026-04-01T10:28:43.923 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8 2026-04-01T10:28:43.927 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:43.946 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8 2026-04-01T10:28:43.946 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:43.946 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-04-01T10:28:43.946 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-04-01T10:28:43.946 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 
2026-04-01T10:28:43.946 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:43.947 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8 2026-04-01T10:28:43.953 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/8 2026-04-01T10:28:44.010 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8 2026-04-01T10:28:44.010 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.010 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-04-01T10:28:44.010 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-04-01T10:28:44.010 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 
2026-04-01T10:28:44.010 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:44.012 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8 2026-04-01T10:28:44.024 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 3/111 2026-04-01T10:28:44.024 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 4/111 2026-04-01T10:28:44.046 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 4/111 2026-04-01T10:28:44.052 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-influxdb-5.3.1-1.el9.noarch 5/111 2026-04-01T10:28:44.052 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 6/111 2026-04-01T10:28:44.064 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 6/111 2026-04-01T10:28:44.069 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-04-01T10:28:44.069 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 1/111 2026-04-01T10:28:44.071 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cherrypy-18.10.0-5.el9.noarch 7/111 2026-04-01T10:28:44.076 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cheroot-10.0.1-5.el9.noarch 8/111 2026-04-01T10:28:44.080 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 
1/111 2026-04-01T10:28:44.085 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 9/111 2026-04-01T10:28:44.089 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 10/111 2026-04-01T10:28:44.098 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:44.098 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.098 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-04-01T10:28:44.098 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-04-01T10:28:44.098 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-04-01T10:28:44.098 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:44.099 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:44.101 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/8 2026-04-01T10:28:44.101 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/8 2026-04-01T10:28:44.101 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2/8 2026-04-01T10:28:44.101 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 3/8 2026-04-01T10:28:44.101 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-04-01T10:28:44.101 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-04-01T10:28:44.101 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8 
2026-04-01T10:28:44.101 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-04-01T10:28:44.109 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.109 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.109 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-04-01T10:28:44.109 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-04-01T10:28:44.109 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-04-01T10:28:44.109 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:44.114 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:44.115 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.126 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.145 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:44.145 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.145 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-04-01T10:28:44.145 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout:Removed: 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:44.150 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 
2026-04-01T10:28:44.154 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:44.165 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:44.167 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 13/111 2026-04-01T10:28:44.173 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 14/111 2026-04-01T10:28:44.178 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jinja2-2.11.3-8.el9_5.noarch 15/111 2026-04-01T10:28:44.208 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-saml-1.16.0-1.el9.noarch 16/111 2026-04-01T10:28:44.214 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 3/111 2026-04-01T10:28:44.215 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 4/111 2026-04-01T10:28:44.216 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 17/111 2026-04-01T10:28:44.219 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 18/111 2026-04-01T10:28:44.228 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 19/111 2026-04-01T10:28:44.237 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 4/111 2026-04-01T10:28:44.241 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 20/111 2026-04-01T10:28:44.241 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 21/111 2026-04-01T10:28:44.244 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-influxdb-5.3.1-1.el9.noarch 5/111 2026-04-01T10:28:44.244 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : 
ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 6/111 2026-04-01T10:28:44.249 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 21/111 2026-04-01T10:28:44.258 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 6/111 2026-04-01T10:28:44.265 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cherrypy-18.10.0-5.el9.noarch 7/111 2026-04-01T10:28:44.269 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cheroot-10.0.1-5.el9.noarch 8/111 2026-04-01T10:28:44.279 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 9/111 2026-04-01T10:28:44.283 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 10/111 2026-04-01T10:28:44.305 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.305 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.305 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-04-01T10:28:44.305 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-04-01T10:28:44.305 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 
2026-04-01T10:28:44.305 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:44.308 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.318 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.335 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:44.335 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.335 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-04-01T10:28:44.335 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:44.342 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 22/111 2026-04-01T10:28:44.344 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:44.356 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:44.357 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-04-01T10:28:44.358 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 23/111 2026-04-01T10:28:44.360 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 13/111 2026-04-01T10:28:44.364 INFO:teuthology.orchestra.run.vm00.stdout:=================================================================================================================== 2026-04-01T10:28:44.364 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-04-01T10:28:44.364 INFO:teuthology.orchestra.run.vm00.stdout:=================================================================================================================== 2026-04-01T10:28:44.364 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 24 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages: 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 447 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 2.9 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 938 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 148 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 66 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 567 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:20.2.0-8.g0597158282e.el9.clyso 
@ceph 54 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.4 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 11 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies: 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: c-ares x86_64 1.19.1-2.el9_4 @baseos 279 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 98 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 990 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 60 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.6 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 57 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 138 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.7.2-4.el9 @baseos 722 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-8.el9.0.1 @appstream 68 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-8.el9.0.1 @appstream 11 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-8.el9.0.1 @appstream 39 k 2026-04-01T10:28:44.365 
INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 409 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-11.el9 @baseos 2.8 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-11.el9 @baseos 330 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 792 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-13.el9_6 @appstream 751 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: nvme-cli x86_64 2.13-1.el9 @baseos 6.8 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 3.14.0-17.el9_7 @appstream 3.5 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: 
protobuf-compiler x86_64 3.14.0-17.el9_7 @crb 2.9 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-04-01T10:28:44.365 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 816 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-chardet noarch 4.0.0-5.el9 @77d52b2cce1347aa9f3fc60d8b93d222 1.4 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-5.el9 @epel 682 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.10.0-5.el9 @epel 1.0 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9_6 @baseos 4.5 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.23-2.el9 @appstream 765 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 
2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-idna noarch 2.10-7.el9_4.1 @77d52b2cce1347aa9f3fc60d8b93d222 513 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-influxdb noarch 5.3.1-1.el9 @epel 747 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-isodate noarch 0.6.1-3.el9 @epel 203 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2 noarch 2.11.3-8.el9_5 @appstream 1.1 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpatch noarch 1.21-16.el9 @0d57cd3fe20446e8b1c08da162742194 55 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpointer noarch 2.0-4.el9.0.1 @0d57cd3fe20446e8b1c08da162742194 34 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-lxml x86_64 4.6.5-3.el9 @appstream 4.2 M 
2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-msgpack x86_64 1.0.3-2.el9 @epel 264 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9_7 @appstream 30 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9_7 @appstream 1.7 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-oauthlib noarch 3.1.1-5.el9 @0d57cd3fe20446e8b1c08da162742194 888 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9.0.1 @baseos 430 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable noarch 0.7.2-27.el9.0.1 @0d57cd3fe20446e8b1c08da162742194 166 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9_7 @appstream 1.4 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9_7 @appstream 622 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9_7 @appstream 1.0 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-04-01T10:28:44.366 
INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9.0.1 @baseos 635 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pysocks noarch 1.7.1-12.el9.0.1 @77d52b2cce1347aa9f3fc60d8b93d222 88 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pytz noarch 2021.1-5.el9 @0d57cd3fe20446e8b1c08da162742194 176 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9_6 @baseos 405 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-saml noarch 1.16.0-1.el9 @epel 730 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 72 M 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9.0.1 @appstream 99 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-6.el9_7.1 @baseos 746 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmlsec x86_64 1.3.13-1.el9 @epel 158 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 24.09.0-1.el9 @appstream 588 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 24.09.0-1.el9 @appstream 64 k 2026-04-01T10:28:44.366 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools x86_64 1:7.2-9.el9 @baseos 1.9 M 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout: xmlsec1 x86_64 1.2.29-13.el9 @appstream 596 k 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout: xmlsec1-openssl x86_64 1.2.29-13.el9 @appstream 281 k 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout:=================================================================================================================== 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout:Remove 111 Packages 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 14/111 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-xmlsec-1.3.13-1.el9.x86_64 24/111 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 687 M 2026-04-01T10:28:44.367 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-04-01T10:28:44.370 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-lxml-4.6.5-3.el9.x86_64 25/111 2026-04-01T10:28:44.372 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jinja2-2.11.3-8.el9_5.noarch 15/111 2026-04-01T10:28:44.383 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: 
libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:44.383 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-04-01T10:28:44.383 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:44.384 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:44.393 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-04-01T10:28:44.393 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-04-01T10:28:44.411 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-saml-1.16.0-1.el9.noarch 16/111 2026-04-01T10:28:44.413 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:44.419 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 27/111 2026-04-01T10:28:44.420 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 17/111 2026-04-01T10:28:44.422 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : xmlsec1-openssl-1.2.29-13.el9.x86_64 28/111 2026-04-01T10:28:44.424 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 18/111 2026-04-01T10:28:44.435 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 19/111 2026-04-01T10:28:44.435 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : xmlsec1-1.2.29-13.el9.x86_64 29/111 2026-04-01T10:28:44.441 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cryptography-36.0.1-5.el9_6.x86_64 30/111 2026-04-01T10:28:44.443 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 20/111 2026-04-01T10:28:44.443 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 21/111 2026-04-01T10:28:44.444 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : 
protobuf-compiler-3.14.0-17.el9_7.x86_64 31/111 2026-04-01T10:28:44.447 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 32/111 2026-04-01T10:28:44.452 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 21/111 2026-04-01T10:28:44.465 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:44.465 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.465 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-04-01T10:28:44.465 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-04-01T10:28:44.465 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 
2026-04-01T10:28:44.465 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:44.467 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:44.476 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:44.480 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 34/111 2026-04-01T10:28:44.483 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 35/111 2026-04-01T10:28:44.485 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 36/111 2026-04-01T10:28:44.489 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 37/111 2026-04-01T10:28:44.493 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 38/111 2026-04-01T10:28:44.495 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 39/111 2026-04-01T10:28:44.495 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 40/111 2026-04-01T10:28:44.553 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-04-01T10:28:44.553 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-04-01T10:28:44.555 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 40/111 2026-04-01T10:28:44.558 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 22/111 2026-04-01T10:28:44.564 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 41/111 2026-04-01T10:28:44.568 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 42/111 2026-04-01T10:28:44.577 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-requests-2.25.1-10.el9_6.noarch 43/111 2026-04-01T10:28:44.577 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 23/111 2026-04-01T10:28:44.582 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 44/111 2026-04-01T10:28:44.585 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-xmlsec-1.3.13-1.el9.x86_64 24/111 2026-04-01T10:28:44.589 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-lxml-4.6.5-3.el9.x86_64 25/111 2026-04-01T10:28:44.592 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 45/111 2026-04-01T10:28:44.598 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 46/111 2026-04-01T10:28:44.603 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:44.603 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 
2026-04-01T10:28:44.603 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:44.603 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-urllib3-1.26.5-6.el9_7.1.noarch 47/111 2026-04-01T10:28:44.604 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:44.607 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 48/111 2026-04-01T10:28:44.636 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:44.641 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 27/111 2026-04-01T10:28:44.644 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : xmlsec1-openssl-1.2.29-13.el9.x86_64 28/111 2026-04-01T10:28:44.654 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 49/111 2026-04-01T10:28:44.657 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : xmlsec1-1.2.29-13.el9.x86_64 29/111 2026-04-01T10:28:44.665 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cryptography-36.0.1-5.el9_6.x86_64 30/111 2026-04-01T10:28:44.669 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : protobuf-compiler-3.14.0-17.el9_7.x86_64 31/111 2026-04-01T10:28:44.670 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-numpy-1:1.23.5-2.el9_7.x86_64 50/111 2026-04-01T10:28:44.672 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 32/111 2026-04-01T10:28:44.674 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 51/111 2026-04-01T10:28:44.679 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 52/111 2026-04-01T10:28:44.681 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 53/111 2026-04-01T10:28:44.685 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : 
libgfortran-11.5.0-11.el9.x86_64 54/111 2026-04-01T10:28:44.688 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 55/111 2026-04-01T10:28:44.691 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:44.691 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.691 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-04-01T10:28:44.691 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-04-01T10:28:44.691 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-04-01T10:28:44.691 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:44.692 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:44.702 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:44.706 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 34/111 2026-04-01T10:28:44.709 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 35/111 2026-04-01T10:28:44.710 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:44.710 INFO:teuthology.orchestra.run.vm07.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.710 INFO:teuthology.orchestra.run.vm07.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-04-01T10:28:44.710 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:44.710 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:44.713 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 36/111 2026-04-01T10:28:44.716 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 37/111 2026-04-01T10:28:44.720 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:44.720 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 38/111 2026-04-01T10:28:44.721 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-04-01T10:28:44.721 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 1/111 2026-04-01T10:28:44.722 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 57/111 2026-04-01T10:28:44.723 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 39/111 2026-04-01T10:28:44.723 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 40/111 2026-04-01T10:28:44.724 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : flexiblas-3.0.4-8.el9.0.1.x86_64 58/111 2026-04-01T10:28:44.727 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ply-3.11-14.el9.0.1.noarch 59/111 2026-04-01T10:28:44.730 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 
1/111 2026-04-01T10:28:44.730 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-idna-2.10-7.el9_4.1.noarch 60/111 2026-04-01T10:28:44.735 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pysocks-1.7.1-12.el9.0.1.noarch 61/111 2026-04-01T10:28:44.740 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyasn1-0.4.8-7.el9_7.noarch 62/111 2026-04-01T10:28:44.746 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 63/111 2026-04-01T10:28:44.752 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:44.752 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.752 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-04-01T10:28:44.753 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-04-01T10:28:44.753 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-04-01T10:28:44.753 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:44.753 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:44.754 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 64/111 2026-04-01T10:28:44.760 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 65/111 2026-04-01T10:28:44.764 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 66/111 2026-04-01T10:28:44.767 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 67/111 2026-04-01T10:28:44.769 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 68/111 2026-04-01T10:28:44.771 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 2/111 2026-04-01T10:28:44.771 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 69/111 2026-04-01T10:28:44.774 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 70/111 2026-04-01T10:28:44.776 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-toml-0.10.2-6.el9.0.1.noarch 71/111 2026-04-01T10:28:44.779 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 72/111 2026-04-01T10:28:44.782 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.0.1.noarch 73/111 2026-04-01T10:28:44.785 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 40/111 2026-04-01T10:28:44.789 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 74/111 2026-04-01T10:28:44.794 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-devel-3.9.23-2.el9.x86_64 75/111 2026-04-01T10:28:44.795 
INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 41/111 2026-04-01T10:28:44.796 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-jsonpointer-2.0-4.el9.0.1.noarch 76/111 2026-04-01T10:28:44.799 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 42/111 2026-04-01T10:28:44.800 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 77/111 2026-04-01T10:28:44.803 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-isodate-0.6.1-3.el9.noarch 78/111 2026-04-01T10:28:44.806 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 79/111 2026-04-01T10:28:44.809 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-requests-2.25.1-10.el9_6.noarch 43/111 2026-04-01T10:28:44.811 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 80/111 2026-04-01T10:28:44.815 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 44/111 2026-04-01T10:28:44.815 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-protobuf-3.14.0-17.el9_7.noarch 81/111 2026-04-01T10:28:44.819 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 82/111 2026-04-01T10:28:44.822 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 83/111 2026-04-01T10:28:44.823 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 
84/111 2026-04-01T10:28:44.825 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 85/111 2026-04-01T10:28:44.826 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 45/111 2026-04-01T10:28:44.833 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 46/111 2026-04-01T10:28:44.838 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-urllib3-1.26.5-6.el9_7.1.noarch 47/111 2026-04-01T10:28:44.843 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 48/111 2026-04-01T10:28:44.845 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:44.845 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-04-01T10:28:44.845 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 
2026-04-01T10:28:44.845 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:44.852 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:44.852 INFO:teuthology.orchestra.run.vm07.stdout:warning: file /etc/logrotate.d/ceph: remove failed: No such file or directory 2026-04-01T10:28:44.852 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:44.879 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 3/111 2026-04-01T10:28:44.879 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 4/111 2026-04-01T10:28:44.881 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:44.881 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 87/111 2026-04-01T10:28:44.893 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 87/111 2026-04-01T10:28:44.896 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 49/111 2026-04-01T10:28:44.898 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 88/111 2026-04-01T10:28:44.899 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 4/111 2026-04-01T10:28:44.901 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 
89/111 2026-04-01T10:28:44.904 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-prettytable-0.7.2-27.el9.0.1.noarch 90/111 2026-04-01T10:28:44.904 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 91/111 2026-04-01T10:28:44.905 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-influxdb-5.3.1-1.el9.noarch 5/111 2026-04-01T10:28:44.905 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 6/111 2026-04-01T10:28:44.908 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-numpy-1:1.23.5-2.el9_7.x86_64 50/111 2026-04-01T10:28:44.911 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 51/111 2026-04-01T10:28:44.914 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 52/111 2026-04-01T10:28:44.917 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 53/111 2026-04-01T10:28:44.918 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 6/111 2026-04-01T10:28:44.921 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libgfortran-11.5.0-11.el9.x86_64 54/111 2026-04-01T10:28:44.923 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 55/111 2026-04-01T10:28:44.925 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cherrypy-18.10.0-5.el9.noarch 7/111 2026-04-01T10:28:44.929 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cheroot-10.0.1-5.el9.noarch 8/111 2026-04-01T10:28:44.938 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 9/111 2026-04-01T10:28:44.942 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 10/111 2026-04-01T10:28:44.945 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: 
ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:44.945 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.945 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-04-01T10:28:44.945 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:44.946 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:44.955 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:44.957 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 57/111 2026-04-01T10:28:44.960 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-3.0.4-8.el9.0.1.x86_64 58/111 2026-04-01T10:28:44.963 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ply-3.11-14.el9.0.1.noarch 59/111 2026-04-01T10:28:44.966 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-idna-2.10-7.el9_4.1.noarch 60/111 2026-04-01T10:28:44.966 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.967 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.967 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-04-01T10:28:44.967 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-04-01T10:28:44.967 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 
2026-04-01T10:28:44.967 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:44.970 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.972 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pysocks-1.7.1-12.el9.0.1.noarch 61/111 2026-04-01T10:28:44.976 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyasn1-0.4.8-7.el9_7.noarch 62/111 2026-04-01T10:28:44.979 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 11/111 2026-04-01T10:28:44.983 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 63/111 2026-04-01T10:28:44.993 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 64/111 2026-04-01T10:28:44.996 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:44.996 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:44.996 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-04-01T10:28:44.996 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:45.000 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 65/111 2026-04-01T10:28:45.003 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 66/111 2026-04-01T10:28:45.005 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:45.006 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 67/111 2026-04-01T10:28:45.009 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 68/111 2026-04-01T10:28:45.011 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 69/111 2026-04-01T10:28:45.014 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 70/111 2026-04-01T10:28:45.015 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 12/111 2026-04-01T10:28:45.017 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-toml-0.10.2-6.el9.0.1.noarch 71/111 2026-04-01T10:28:45.018 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 13/111 2026-04-01T10:28:45.020 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 72/111 2026-04-01T10:28:45.023 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.0.1.noarch 73/111 2026-04-01T10:28:45.024 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 14/111 2026-04-01T10:28:45.029 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jinja2-2.11.3-8.el9_5.noarch 15/111 2026-04-01T10:28:45.032 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 74/111 2026-04-01T10:28:45.037 
INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-devel-3.9.23-2.el9.x86_64 75/111 2026-04-01T10:28:45.040 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jsonpointer-2.0-4.el9.0.1.noarch 76/111 2026-04-01T10:28:45.043 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 77/111 2026-04-01T10:28:45.047 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-isodate-0.6.1-3.el9.noarch 78/111 2026-04-01T10:28:45.050 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 79/111 2026-04-01T10:28:45.056 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 80/111 2026-04-01T10:28:45.060 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-protobuf-3.14.0-17.el9_7.noarch 81/111 2026-04-01T10:28:45.064 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 82/111 2026-04-01T10:28:45.067 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-saml-1.16.0-1.el9.noarch 16/111 2026-04-01T10:28:45.067 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 83/111 2026-04-01T10:28:45.069 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 
84/111 2026-04-01T10:28:45.071 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 85/111 2026-04-01T10:28:45.074 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 17/111 2026-04-01T10:28:45.077 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 18/111 2026-04-01T10:28:45.088 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 19/111 2026-04-01T10:28:45.091 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:45.091 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-04-01T10:28:45.091 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-04-01T10:28:45.091 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:45.097 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 20/111 2026-04-01T10:28:45.097 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 21/111 2026-04-01T10:28:45.100 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:45.100 INFO:teuthology.orchestra.run.vm03.stdout:warning: file /etc/logrotate.d/ceph: remove failed: No such file or directory 2026-04-01T10:28:45.100 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:45.105 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 21/111 2026-04-01T10:28:45.125 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:45.125 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 87/111 
2026-04-01T10:28:45.208 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 22/111 2026-04-01T10:28:45.226 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 23/111 2026-04-01T10:28:45.233 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-xmlsec-1.3.13-1.el9.x86_64 24/111 2026-04-01T10:28:45.237 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-lxml-4.6.5-3.el9.x86_64 25/111 2026-04-01T10:28:45.251 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:45.252 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-04-01T10:28:45.252 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:45.253 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:45.288 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 26/111 2026-04-01T10:28:45.293 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 27/111 2026-04-01T10:28:45.295 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : xmlsec1-openssl-1.2.29-13.el9.x86_64 28/111 2026-04-01T10:28:45.310 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : xmlsec1-1.2.29-13.el9.x86_64 29/111 2026-04-01T10:28:45.316 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cryptography-36.0.1-5.el9_6.x86_64 30/111 2026-04-01T10:28:45.319 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : protobuf-compiler-3.14.0-17.el9_7.x86_64 31/111 2026-04-01T10:28:45.322 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 32/111 2026-04-01T10:28:45.346 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:45.346 
INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-04-01T10:28:45.346 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-04-01T10:28:45.346 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-04-01T10:28:45.346 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-04-01T10:28:45.346 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:45.347 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:45.356 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 33/111 2026-04-01T10:28:45.360 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 34/111 2026-04-01T10:28:45.363 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 35/111 2026-04-01T10:28:45.365 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 36/111 2026-04-01T10:28:45.368 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 37/111 2026-04-01T10:28:45.372 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 38/111 2026-04-01T10:28:45.374 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 39/111 2026-04-01T10:28:45.375 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 40/111 2026-04-01T10:28:45.434 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 40/111 2026-04-01T10:28:45.444 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : 
python3-kubernetes-1:26.1.0-3.el9.noarch 41/111 2026-04-01T10:28:45.448 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 42/111 2026-04-01T10:28:45.456 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-requests-2.25.1-10.el9_6.noarch 43/111 2026-04-01T10:28:45.462 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 44/111 2026-04-01T10:28:45.474 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 45/111 2026-04-01T10:28:45.481 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 46/111 2026-04-01T10:28:45.486 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-urllib3-1.26.5-6.el9_7.1.noarch 47/111 2026-04-01T10:28:45.491 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 48/111 2026-04-01T10:28:45.540 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 49/111 2026-04-01T10:28:45.552 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-numpy-1:1.23.5-2.el9_7.x86_64 50/111 2026-04-01T10:28:45.555 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 51/111 2026-04-01T10:28:45.559 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 52/111 2026-04-01T10:28:45.561 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 53/111 2026-04-01T10:28:45.565 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libgfortran-11.5.0-11.el9.x86_64 54/111 2026-04-01T10:28:45.568 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 55/111 2026-04-01T10:28:45.590 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:45.590 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern 
passed to enable, but globs are not supported for this. 2026-04-01T10:28:45.590 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-04-01T10:28:45.590 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:45.590 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:45.599 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-8.g05971582 56/111 2026-04-01T10:28:45.601 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 57/111 2026-04-01T10:28:45.603 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-3.0.4-8.el9.0.1.x86_64 58/111 2026-04-01T10:28:45.607 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ply-3.11-14.el9.0.1.noarch 59/111 2026-04-01T10:28:45.610 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-idna-2.10-7.el9_4.1.noarch 60/111 2026-04-01T10:28:45.615 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pysocks-1.7.1-12.el9.0.1.noarch 61/111 2026-04-01T10:28:45.620 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyasn1-0.4.8-7.el9_7.noarch 62/111 2026-04-01T10:28:45.626 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 63/111 2026-04-01T10:28:45.635 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 64/111 2026-04-01T10:28:45.641 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 65/111 2026-04-01T10:28:45.644 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 66/111 2026-04-01T10:28:45.646 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 67/111 2026-04-01T10:28:45.648 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : 
python3-repoze-lru-0.7-16.el9.noarch 68/111 2026-04-01T10:28:45.651 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 69/111 2026-04-01T10:28:45.653 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 70/111 2026-04-01T10:28:45.656 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-toml-0.10.2-6.el9.0.1.noarch 71/111 2026-04-01T10:28:45.659 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 72/111 2026-04-01T10:28:45.662 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.0.1.noarch 73/111 2026-04-01T10:28:45.670 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 74/111 2026-04-01T10:28:45.674 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-devel-3.9.23-2.el9.x86_64 75/111 2026-04-01T10:28:45.676 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jsonpointer-2.0-4.el9.0.1.noarch 76/111 2026-04-01T10:28:45.679 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 77/111 2026-04-01T10:28:45.682 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-isodate-0.6.1-3.el9.noarch 78/111 2026-04-01T10:28:45.684 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 79/111 2026-04-01T10:28:45.690 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 80/111 2026-04-01T10:28:45.693 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-protobuf-3.14.0-17.el9_7.noarch 81/111 2026-04-01T10:28:45.696 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 82/111 2026-04-01T10:28:45.700 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 83/111 2026-04-01T10:28:45.701 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : 
ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 84/111 2026-04-01T10:28:45.703 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 85/111 2026-04-01T10:28:45.722 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:45.722 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-04-01T10:28:45.722 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-04-01T10:28:45.722 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:45.729 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:45.729 INFO:teuthology.orchestra.run.vm00.stdout:warning: file /etc/logrotate.d/ceph: remove failed: No such file or directory 2026-04-01T10:28:45.729 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:45.752 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 86/111 2026-04-01T10:28:45.752 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 87/111 2026-04-01T10:28:46.250 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 87/111 2026-04-01T10:28:46.256 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 88/111 2026-04-01T10:28:46.260 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 
89/111 2026-04-01T10:28:46.262 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-prettytable-0.7.2-27.el9.0.1.noarch 90/111 2026-04-01T10:28:46.262 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 91/111 2026-04-01T10:28:46.408 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 87/111 2026-04-01T10:28:46.414 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 88/111 2026-04-01T10:28:46.418 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 89/111 2026-04-01T10:28:46.420 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-prettytable-0.7.2-27.el9.0.1.noarch 90/111 2026-04-01T10:28:46.420 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 91/111 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 91/111 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /sys 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /proc 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /mnt 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /var/tmp 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /home 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /root 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout:skipping the directory /tmp 2026-04-01T10:28:50.135 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:50.145 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qatlib-24.09.0-1.el9.x86_64 92/111 2026-04-01T10:28:50.165 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: 
qatlib-service-24.09.0-1.el9.x86_64 93/111 2026-04-01T10:28:50.165 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qatlib-service-24.09.0-1.el9.x86_64 93/111 2026-04-01T10:28:50.173 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: qatlib-service-24.09.0-1.el9.x86_64 93/111 2026-04-01T10:28:50.176 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 94/111 2026-04-01T10:28:50.179 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 95/111 2026-04-01T10:28:50.181 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 96/111 2026-04-01T10:28:50.183 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 97/111 2026-04-01T10:28:50.183 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 98/111 2026-04-01T10:28:50.214 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 98/111 2026-04-01T10:28:50.219 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : nvme-cli-2.13-1.el9.x86_64 99/111 2026-04-01T10:28:50.232 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 100/111 2026-04-01T10:28:50.232 INFO:teuthology.orchestra.run.vm07.stdout:Removed "/etc/systemd/system/multi-user.target.wants/smartd.service". 
2026-04-01T10:28:50.232 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:50.234 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : smartmontools-1:7.2-9.el9.x86_64 100/111 2026-04-01T10:28:50.243 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 100/111 2026-04-01T10:28:50.245 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 101/111 2026-04-01T10:28:50.247 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libquadmath-11.5.0-11.el9.x86_64 102/111 2026-04-01T10:28:50.249 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : protobuf-3.14.0-17.el9_7.x86_64 103/111 2026-04-01T10:28:50.252 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libxslt-1.1.34-13.el9_6.x86_64 104/111 2026-04-01T10:28:50.255 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 105/111 2026-04-01T10:28:50.262 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 106/111 2026-04-01T10:28:50.270 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : cryptsetup-2.7.2-4.el9.x86_64 107/111 2026-04-01T10:28:50.277 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 108/111 2026-04-01T10:28:50.280 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : c-ares-1.19.1-2.el9_4.x86_64 109/111 2026-04-01T10:28:50.283 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-msgpack-1.0.3-2.el9.x86_64 110/111 2026-04-01T10:28:50.283 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 111/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 
111/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : c-ares-1.19.1-2.el9_4.x86_64 2/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 3/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 4/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 5/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-8.g05971582 6/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 7/111 2026-04-01T10:28:50.383 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 8/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 9/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 10/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 11/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 12/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 
13/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 14/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 15/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 16/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 17/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cryptsetup-2.7.2-4.el9.x86_64 18/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-3.0.4-8.el9.0.1.x86_64 19/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 20/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 21/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 22/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 23/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 24/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 
25/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 26/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libgfortran-11.5.0-11.el9.x86_64 27/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 28/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libquadmath-11.5.0-11.el9.x86_64 29/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 30/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 31/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 32/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libxslt-1.1.34-13.el9_6.x86_64 33/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : nvme-cli-2.13-1.el9.x86_64 34/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 35/111 2026-04-01T10:28:50.384 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 36/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 37/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : protobuf-3.14.0-17.el9_7.x86_64 38/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : protobuf-compiler-3.14.0-17.el9_7.x86_64 39/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 40/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 41/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : 
python3-babel-2.9.1-2.el9.noarch 42/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 43/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 44/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 45/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 46/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 47/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 48/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 49/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cheroot-10.0.1-5.el9.noarch 50/111 2026-04-01T10:28:50.385 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cherrypy-18.10.0-5.el9.noarch 51/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cryptography-36.0.1-5.el9_6.x86_64 52/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-devel-3.9.23-2.el9.x86_64 53/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 54/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 55/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 56/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-idna-2.10-7.el9_4.1.noarch 57/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : 
python3-influxdb-5.3.1-1.el9.noarch 58/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-isodate-0.6.1-3.el9.noarch 59/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 60/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 61/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 62/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 63/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 64/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 65/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jinja2-2.11.3-8.el9_5.noarch 66/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 67/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-jsonpointer-2.0-4.el9.0.1.noarch 68/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 69/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 70/111 2026-04-01T10:28:50.386 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-lxml-4.6.5-3.el9.x86_64 71/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 72/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 73/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: 
Verifying : python3-msgpack-1.0.3-2.el9.x86_64 74/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 75/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-1:1.23.5-2.el9_7.x86_64 76/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 77/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 78/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 79/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ply-3.11-14.el9.0.1.noarch 80/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 81/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-prettytable-0.7.2-27.el9.0.1.noarch 82/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-protobuf-3.14.0-17.el9_7.noarch 83/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 84/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-0.4.8-7.el9_7.noarch 85/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 86/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 87/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.0.1.noarch 88/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-pysocks-1.7.1-12.el9.0.1.noarch 89/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : 
python3-pytz-2021.1-5.el9.noarch 90/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 91/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-2.25.1-10.el9_6.noarch 92/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 93/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 94/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 95/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-saml-1.16.0-1.el9.noarch 96/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 97/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 98/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-toml-0.10.2-6.el9.0.1.noarch 99/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 100/111 2026-04-01T10:28:50.387 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-urllib3-1.26.5-6.el9_7.1.noarch 101/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 102/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-xmlsec-1.3.13-1.el9.x86_64 103/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 104/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatlib-24.09.0-1.el9.x86_64 105/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : 
qatlib-service-24.09.0-1.el9.x86_64 106/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 107/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 108/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : smartmontools-1:7.2-9.el9.x86_64 109/111 2026-04-01T10:28:50.388 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlsec1-1.2.29-13.el9.x86_64 110/111 2026-04-01T10:28:50.462 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : xmlsec1-openssl-1.2.29-13.el9.x86_64 111/111 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: c-ares-1.19.1-2.el9_4.x86_64 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-grafana-dashboards-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-immutable-object-cache-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: 
ceph-mgr-diskprediction-local-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: cryptsetup-2.7.2-4.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-3.0.4-8.el9.0.1.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-netlib-3.0.4-8.el9.0.1.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: grpc-data-1.46.7-10.el9.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: libconfig-1.7.2-9.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: libgfortran-11.5.0-11.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: liboath-2.6.12-1.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: libquadmath-11.5.0-11.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: libunwind-1.6.2-1.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: libxslt-1.1.34-13.el9_6.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: nvme-cli-2.13-1.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: openblas-0.3.29-1.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: pciutils-3.7.0-7.el9.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-3.14.0-17.el9_7.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: protobuf-compiler-3.14.0-17.el9_7.x86_64
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-04-01T10:28:50.463 INFO:teuthology.orchestra.run.vm07.stdout: python3-babel-2.9.1-2.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-cheroot-10.0.1-5.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy-18.10.0-5.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-cryptography-36.0.1-5.el9_6.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-devel-3.9.23-2.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-idna-2.10-7.el9_4.1.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-influxdb-5.3.1-1.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-isodate-0.6.1-3.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jinja2-2.11.3-8.el9_5.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-jsonpointer-2.0-4.el9.0.1.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-lxml-4.6.5-3.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-msgpack-1.0.3-2.el9.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-1:1.23.5-2.el9_7.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-packaging-20.9-5.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-ply-3.11-14.el9.0.1.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-portend-3.1.0-2.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable-0.7.2-27.el9.0.1.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-protobuf-3.14.0-17.el9_7.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-0.4.8-7.el9_7.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyasn1-modules-0.4.8-7.el9_7.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-pycparser-2.20-6.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyparsing-2.4.7-9.el9.0.1.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-pysocks-1.7.1-12.el9.0.1.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-pytz-2021.1-5.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-2.25.1-10.el9_6.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-04-01T10:28:50.464 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes-2.5.1-5.el9.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa-4.9-2.el9.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-saml-1.16.0-1.el9.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml-0.10.2-6.el9.0.1.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-urllib3-1.26.5-6.el9_7.1.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-xmlsec-1.3.13-1.el9.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-24.09.0-1.el9.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: qatlib-service-24.09.0-1.el9.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools-1:7.2-9.el9.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: xmlsec1-1.2.29-13.el9.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout: xmlsec1-openssl-1.2.29-13.el9.x86_64
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:50.465 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:50.659 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:50.659 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:50.659 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-04-01T10:28:50.659 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:50.659 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-04-01T10:28:50.659 INFO:teuthology.orchestra.run.vm07.stdout: cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.0 M
2026-04-01T10:28:50.659 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:50.660 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-04-01T10:28:50.660 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:50.660 INFO:teuthology.orchestra.run.vm07.stdout:Remove 1 Package
2026-04-01T10:28:50.660 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:50.660 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 1.0 M
2026-04-01T10:28:50.660 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-04-01T10:28:50.661 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-04-01T10:28:50.661 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-04-01T10:28:50.662 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-04-01T10:28:50.662 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-04-01T10:28:50.678 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-04-01T10:28:50.678 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1
2026-04-01T10:28:50.793 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1
2026-04-01T10:28:50.829 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1
2026-04-01T10:28:50.829 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:50.829 INFO:teuthology.orchestra.run.vm07.stdout:Removed:
2026-04-01T10:28:50.829 INFO:teuthology.orchestra.run.vm07.stdout: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:50.829 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:50.829 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:50.995 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-immutable-object-cache
2026-04-01T10:28:50.995 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-04-01T10:28:50.998 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:50.999 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-04-01T10:28:50.999 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:51.162 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr
2026-04-01T10:28:51.162 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-04-01T10:28:51.165 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:51.166 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-04-01T10:28:51.166 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:51.326 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-dashboard
2026-04-01T10:28:51.326 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-04-01T10:28:51.329 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:51.329 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-04-01T10:28:51.329 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:51.497 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-04-01T10:28:51.497 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-04-01T10:28:51.500 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:51.501 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-04-01T10:28:51.501 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:51.678 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-rook
2026-04-01T10:28:51.678 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-04-01T10:28:51.680 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:51.681 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-04-01T10:28:51.681 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 91/111
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /sys
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /proc
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /mnt
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /var/tmp
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /home
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /root
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /tmp
2026-04-01T10:28:51.766 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:51.774 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : qatlib-24.09.0-1.el9.x86_64 92/111
2026-04-01T10:28:51.792 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-24.09.0-1.el9.x86_64 93/111
2026-04-01T10:28:51.792 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : qatlib-service-24.09.0-1.el9.x86_64 93/111
2026-04-01T10:28:51.801 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-24.09.0-1.el9.x86_64 93/111
2026-04-01T10:28:51.804 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 94/111
2026-04-01T10:28:51.807 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 95/111
2026-04-01T10:28:51.809 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 96/111
2026-04-01T10:28:51.811 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 97/111
2026-04-01T10:28:51.811 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 98/111
2026-04-01T10:28:51.843 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 98/111
2026-04-01T10:28:51.844 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-mgr-cephadm
2026-04-01T10:28:51.845 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-04-01T10:28:51.847 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:51.848 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-04-01T10:28:51.848 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:51.848 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : nvme-cli-2.13-1.el9.x86_64 99/111
2026-04-01T10:28:51.862 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 100/111
2026-04-01T10:28:51.862 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/smartd.service".
2026-04-01T10:28:51.863 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:51.864 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : smartmontools-1:7.2-9.el9.x86_64 100/111
2026-04-01T10:28:51.873 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 100/111
2026-04-01T10:28:51.876 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 101/111
2026-04-01T10:28:51.878 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libquadmath-11.5.0-11.el9.x86_64 102/111
2026-04-01T10:28:51.881 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : protobuf-3.14.0-17.el9_7.x86_64 103/111
2026-04-01T10:28:51.884 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libxslt-1.1.34-13.el9_6.x86_64 104/111
2026-04-01T10:28:51.888 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 105/111
2026-04-01T10:28:51.894 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 106/111
2026-04-01T10:28:51.902 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : cryptsetup-2.7.2-4.el9.x86_64 107/111
2026-04-01T10:28:51.906 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 108/111
2026-04-01T10:28:51.909 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : c-ares-1.19.1-2.el9_4.x86_64 109/111
2026-04-01T10:28:51.911 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-msgpack-1.0.3-2.el9.x86_64 110/111
2026-04-01T10:28:51.911 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 111/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 111/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : c-ares-1.19.1-2.el9_4.x86_64 2/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 3/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 4/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 5/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-8.g05971582 6/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 7/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 8/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 9/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 10/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 11/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 12/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 13/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 14/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 15/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 16/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 17/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cryptsetup-2.7.2-4.el9.x86_64 18/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-3.0.4-8.el9.0.1.x86_64 19/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 20/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 21/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 22/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 23/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 24/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 25/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 26/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libgfortran-11.5.0-11.el9.x86_64 27/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 28/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libquadmath-11.5.0-11.el9.x86_64 29/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 30/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 31/111
2026-04-01T10:28:52.003 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 32/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libxslt-1.1.34-13.el9_6.x86_64 33/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : nvme-cli-2.13-1.el9.x86_64 34/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 35/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 36/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 37/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-3.14.0-17.el9_7.x86_64 38/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-compiler-3.14.0-17.el9_7.x86_64 39/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 40/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 41/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 42/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 43/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 44/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 45/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 46/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 47/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 48/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 49/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cheroot-10.0.1-5.el9.noarch 50/111
2026-04-01T10:28:52.004 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cherrypy-18.10.0-5.el9.noarch 51/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cryptography-36.0.1-5.el9_6.x86_64 52/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-devel-3.9.23-2.el9.x86_64 53/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 54/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 55/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 56/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-idna-2.10-7.el9_4.1.noarch 57/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-influxdb-5.3.1-1.el9.noarch 58/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-isodate-0.6.1-3.el9.noarch 59/111
2026-04-01T10:28:52.005 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 60/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 61/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 62/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 63/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 64/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 65/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jinja2-2.11.3-8.el9_5.noarch 66/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 67/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jsonpointer-2.0-4.el9.0.1.noarch 68/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 69/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 70/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-lxml-4.6.5-3.el9.x86_64 71/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 72/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 73/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-msgpack-1.0.3-2.el9.x86_64 74/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 75/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-1:1.23.5-2.el9_7.x86_64 76/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 77/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 78/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 79/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ply-3.11-14.el9.0.1.noarch 80/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 81/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-prettytable-0.7.2-27.el9.0.1.noarch 82/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-protobuf-3.14.0-17.el9_7.noarch 83/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 84/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-0.4.8-7.el9_7.noarch 85/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 86/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 87/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.0.1.noarch 88/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pysocks-1.7.1-12.el9.0.1.noarch 89/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 90/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 91/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-2.25.1-10.el9_6.noarch 92/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 93/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 94/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 95/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-saml-1.16.0-1.el9.noarch 96/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 97/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 98/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-toml-0.10.2-6.el9.0.1.noarch 99/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 100/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urllib3-1.26.5-6.el9_7.1.noarch 101/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 102/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-xmlsec-1.3.13-1.el9.x86_64 103/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 104/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-24.09.0-1.el9.x86_64 105/111
2026-04-01T10:28:52.006 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-service-24.09.0-1.el9.x86_64 106/111
2026-04-01T10:28:52.007 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 107/111
2026-04-01T10:28:52.007 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 108/111
2026-04-01T10:28:52.007 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : smartmontools-1:7.2-9.el9.x86_64 109/111
2026-04-01T10:28:52.007 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlsec1-1.2.29-13.el9.x86_64 110/111
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout:Removing:
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 2.7 M
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies:
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout: fuse x86_64 2.9.9-17.el9 @baseos 213 k
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:52.020 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary
2026-04-01T10:28:52.021 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================
2026-04-01T10:28:52.021 INFO:teuthology.orchestra.run.vm07.stdout:Remove 2 Packages
2026-04-01T10:28:52.021 INFO:teuthology.orchestra.run.vm07.stdout:
2026-04-01T10:28:52.021 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 2.9 M
2026-04-01T10:28:52.021 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check
2026-04-01T10:28:52.023 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded.
2026-04-01T10:28:52.023 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test
2026-04-01T10:28:52.034 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded.
2026-04-01T10:28:52.034 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction
2026-04-01T10:28:52.064 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1
2026-04-01T10:28:52.068 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/2
2026-04-01T10:28:52.082 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : fuse-2.9.9-17.el9.x86_64 2/2
2026-04-01T10:28:52.083 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlsec1-openssl-1.2.29-13.el9.x86_64 111/111
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: c-ares-1.19.1-2.el9_4.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup-2.7.2-4.el9.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-3.0.4-8.el9.0.1.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib-3.0.4-8.el9.0.1.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data-1.46.7-10.el9.noarch
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.084
INFO:teuthology.orchestra.run.vm00.stdout: libconfig-1.7.2-9.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran-11.5.0-11.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: liboath-2.6.12-1.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath-11.5.0-11.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: libunwind-1.6.2-1.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: libxslt-1.1.34-13.el9_6.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: nvme-cli-2.13-1.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: openblas-0.3.29-1.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: pciutils-3.7.0-7.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-3.14.0-17.el9_7.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler-3.14.0-17.el9_7.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel-2.9.1-2.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-cachetools-4.2.4-1.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot-10.0.1-5.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy-18.10.0-5.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography-36.0.1-5.el9_6.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel-3.9.23-2.el9.x86_64 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-04-01T10:28:52.084 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-idna-2.10-7.el9_4.1.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-influxdb-5.3.1-1.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-isodate-0.6.1-3.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-04-01T10:28:52.085 
INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2-2.11.3-8.el9_5.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpointer-2.0-4.el9.0.1.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-lxml-4.6.5-3.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-msgpack-1.0.3-2.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-1:1.23.5-2.el9_7.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging-20.9-5.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply-3.11-14.el9.0.1.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend-3.1.0-2.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable-0.7.2-27.el9.0.1.noarch 
2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf-3.14.0-17.el9_7.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-0.4.8-7.el9_7.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules-0.4.8-7.el9_7.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser-2.20-6.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.0.1.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-pysocks-1.7.1-12.el9.0.1.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-pytz-2021.1-5.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-2.25.1-10.el9_6.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes-2.5.1-5.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa-4.9-2.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-saml-1.16.0-1.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml-0.10.2-6.el9.0.1.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3-1.26.5-6.el9_7.1.noarch 
2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmlsec-1.3.13-1.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-24.09.0-1.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service-24.09.0-1.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools-1:7.2-9.el9.x86_64 2026-04-01T10:28:52.085 INFO:teuthology.orchestra.run.vm00.stdout: xmlsec1-1.2.29-13.el9.x86_64 2026-04-01T10:28:52.086 INFO:teuthology.orchestra.run.vm00.stdout: xmlsec1-openssl-1.2.29-13.el9.x86_64 2026-04-01T10:28:52.086 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:52.086 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 
2026-04-01T10:28:52.144 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: fuse-2.9.9-17.el9.x86_64 2/2 2026-04-01T10:28:52.144 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/2 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 91/111 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /sys 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /proc 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /mnt 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /var/tmp 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /home 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /root 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /tmp 2026-04-01T10:28:52.147 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:52.155 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatlib-24.09.0-1.el9.x86_64 92/111 2026-04-01T10:28:52.174 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-24.09.0-1.el9.x86_64 93/111 2026-04-01T10:28:52.174 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatlib-service-24.09.0-1.el9.x86_64 93/111 2026-04-01T10:28:52.184 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-24.09.0-1.el9.x86_64 93/111 2026-04-01T10:28:52.185 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 2/2 2026-04-01T10:28:52.185 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:52.185 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-04-01T10:28:52.185 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 
fuse-2.9.9-17.el9.x86_64 2026-04-01T10:28:52.185 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:52.185 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:52.189 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 94/111 2026-04-01T10:28:52.192 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 95/111 2026-04-01T10:28:52.195 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 96/111 2026-04-01T10:28:52.197 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 97/111 2026-04-01T10:28:52.197 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 98/111 2026-04-01T10:28:52.228 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 98/111 2026-04-01T10:28:52.234 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : nvme-cli-2.13-1.el9.x86_64 99/111 2026-04-01T10:28:52.250 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 100/111 2026-04-01T10:28:52.250 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/smartd.service". 
2026-04-01T10:28:52.250 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:52.252 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : smartmontools-1:7.2-9.el9.x86_64 100/111 2026-04-01T10:28:52.261 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: smartmontools-1:7.2-9.el9.x86_64 100/111 2026-04-01T10:28:52.263 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 101/111 2026-04-01T10:28:52.266 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libquadmath-11.5.0-11.el9.x86_64 102/111 2026-04-01T10:28:52.268 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : protobuf-3.14.0-17.el9_7.x86_64 103/111 2026-04-01T10:28:52.271 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libxslt-1.1.34-13.el9_6.x86_64 104/111 2026-04-01T10:28:52.275 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 105/111 2026-04-01T10:28:52.281 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 106/111 2026-04-01T10:28:52.289 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : cryptsetup-2.7.2-4.el9.x86_64 107/111 2026-04-01T10:28:52.294 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 108/111 2026-04-01T10:28:52.295 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout: cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.0 M 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout:Remove 1 Package 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 1.0 M 2026-04-01T10:28:52.296 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-04-01T10:28:52.297 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : c-ares-1.19.1-2.el9_4.x86_64 109/111 2026-04-01T10:28:52.298 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-04-01T10:28:52.298 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-04-01T10:28:52.299 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 2026-04-01T10:28:52.299 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-04-01T10:28:52.300 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-msgpack-1.0.3-2.el9.x86_64 110/111 2026-04-01T10:28:52.300 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 
111/111 2026-04-01T10:28:52.316 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-04-01T10:28:52.316 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1 2026-04-01T10:28:52.357 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: ceph-volume 2026-04-01T10:28:52.357 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:52.360 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:52.361 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-04-01T10:28:52.361 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:52.403 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 111/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : c-ares-1.19.1-2.el9_4.x86_64 2/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 3/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x8 4/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-8.g0597158282e. 
5/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-8.g05971582 6/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 7/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.cly 8/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.c 9/111 2026-04-01T10:28:52.404 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-8.g059715 10/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.c 11/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el 12/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso. 
13/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_6 14/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.e 15/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x 16/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.no 17/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cryptsetup-2.7.2-4.el9.x86_64 18/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-3.0.4-8.el9.0.1.x86_64 19/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 20/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 21/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 22/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 23/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 24/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso. 
25/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 26/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libgfortran-11.5.0-11.el9.x86_64 27/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 28/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libquadmath-11.5.0-11.el9.x86_64 29/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libradosstriper1-2:20.2.0-8.g0597158282e.el9.cly 30/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 31/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 32/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libxslt-1.1.34-13.el9_6.x86_64 33/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvme-cli-2.13-1.el9.x86_64 34/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 35/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 36/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 37/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-3.14.0-17.el9_7.x86_64 38/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-compiler-3.14.0-17.el9_7.x86_64 39/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 40/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 41/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-babel-2.9.1-2.el9.noarch 42/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 43/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 44/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 45/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-common-2:20.2.0-8.g0597158282e.el9. 46/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 47/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 48/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 49/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cheroot-10.0.1-5.el9.noarch 50/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cherrypy-18.10.0-5.el9.noarch 51/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cryptography-36.0.1-5.el9_6.x86_64 52/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-devel-3.9.23-2.el9.x86_64 53/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 54/111 2026-04-01T10:28:52.405 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 55/111 2026-04-01T10:28:52.406 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 56/111 2026-04-01T10:28:52.406 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-idna-2.10-7.el9_4.1.noarch 57/111 2026-04-01T10:28:52.406 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-influxdb-5.3.1-1.el9.noarch 58/111 2026-04-01T10:28:52.406 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-isodate-0.6.1-3.el9.noarch 59/111 2026-04-01T10:28:52.406 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 60/111 2026-04-01T10:28:52.406 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 61/111 2026-04-01T10:28:52.406 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 62/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 63/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 64/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 65/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jinja2-2.11.3-8.el9_5.noarch 66/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 67/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jsonpointer-2.0-4.el9.0.1.noarch 68/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 69/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 70/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-lxml-4.6.5-3.el9.x86_64 71/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 72/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 73/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: 
Verifying : python3-msgpack-1.0.3-2.el9.x86_64 74/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 75/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-1:1.23.5-2.el9_7.x86_64 76/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 77/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 78/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 79/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ply-3.11-14.el9.0.1.noarch 80/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 81/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-prettytable-0.7.2-27.el9.0.1.noarch 82/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-protobuf-3.14.0-17.el9_7.noarch 83/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 84/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-0.4.8-7.el9_7.noarch 85/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9_7.noarch 86/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 87/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.0.1.noarch 88/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pysocks-1.7.1-12.el9.0.1.noarch 89/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-pytz-2021.1-5.el9.noarch 90/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 91/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-2.25.1-10.el9_6.noarch 92/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 93/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 94/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 95/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-saml-1.16.0-1.el9.noarch 96/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 97/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 98/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-toml-0.10.2-6.el9.0.1.noarch 99/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 100/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urllib3-1.26.5-6.el9_7.1.noarch 101/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 102/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-xmlsec-1.3.13-1.el9.x86_64 103/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 104/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-24.09.0-1.el9.x86_64 105/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
qatlib-service-24.09.0-1.el9.x86_64 106/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 107/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86 108/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : smartmontools-1:7.2-9.el9.x86_64 109/111 2026-04-01T10:28:52.407 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlsec1-1.2.29-13.el9.x86_64 110/111 2026-04-01T10:28:52.433 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1 2026-04-01T10:28:52.470 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1 2026-04-01T10:28:52.470 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:52.470 INFO:teuthology.orchestra.run.vm00.stdout:Removed: 2026-04-01T10:28:52.470 INFO:teuthology.orchestra.run.vm00.stdout: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.470 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:52.470 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 
2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlsec1-openssl-1.2.29-13.el9.x86_64 111/111 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: c-ares-1.19.1-2.el9_4.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.487 
INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup-2.7.2-4.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-3.0.4-8.el9.0.1.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib-3.0.4-8.el9.0.1.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp-3.0.4-8.el9.0.1.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data-1.46.7-10.el9.noarch 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: libconfig-1.7.2-9.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran-11.5.0-11.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: liboath-2.6.12-1.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath-11.5.0-11.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: libunwind-1.6.2-1.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: libxslt-1.1.34-13.el9_6.x86_64 
2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli-2.13-1.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: openblas-0.3.29-1.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: pciutils-3.7.0-7.el9.x86_64 2026-04-01T10:28:52.487 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-3.14.0-17.el9_7.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler-3.14.0-17.el9_7.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel-2.9.1-2.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot-10.0.1-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy-18.10.0-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography-36.0.1-5.el9_6.x86_64 
2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel-3.9.23-2.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-idna-2.10-7.el9_4.1.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb-5.3.1-1.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-isodate-0.6.1-3.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2-2.11.3-8.el9_5.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpointer-2.0-4.el9.0.1.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-lxml-4.6.5-3.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-msgpack-1.0.3-2.el9.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-1:1.23.5-2.el9_7.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py-1:1.23.5-2.el9_7.x86_64 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging-20.9-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply-3.11-14.el9.0.1.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend-3.1.0-2.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable-0.7.2-27.el9.0.1.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf-3.14.0-17.el9_7.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-0.4.8-7.el9_7.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules-0.4.8-7.el9_7.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser-2.20-6.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing-2.4.7-9.el9.0.1.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-pysocks-1.7.1-12.el9.0.1.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-pytz-2021.1-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-2.25.1-10.el9_6.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes-2.5.1-5.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa-4.9-2.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-saml-1.16.0-1.el9.noarch 2026-04-01T10:28:52.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml-0.10.2-6.el9.0.1.noarch 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3-1.26.5-6.el9_7.1.noarch 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmlsec-1.3.13-1.el9.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-24.09.0-1.el9.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service-24.09.0-1.el9.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: 
smartmontools-1:7.2-9.el9.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-1.2.29-13.el9.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: xmlsec1-openssl-1.2.29-13.el9.x86_64 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:52.489 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repo Size 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 449 k 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages: 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 154 k 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout:Remove 2 Packages 2026-04-01T10:28:52.547 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:52.548 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 603 k 2026-04-01T10:28:52.548 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-04-01T10:28:52.550 
INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-04-01T10:28:52.550 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-04-01T10:28:52.561 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-04-01T10:28:52.561 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-04-01T10:28:52.590 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-04-01T10:28:52.592 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x8 1/2 2026-04-01T10:28:52.606 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:52.643 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-immutable-object-cache 2026-04-01T10:28:52.643 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal. 2026-04-01T10:28:52.647 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-04-01T10:28:52.647 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do. 2026-04-01T10:28:52.648 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:52.669 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:52.669 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x8 1/2 2026-04-01T10:28:52.698 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 
2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout:Removing: 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout: cephadm noarch 2:20.2.0-8.g0597158282e.el9.clyso @ceph-noarch 1.0 M 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout:Remove 1 Package 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 1.0 M 2026-04-01T10:28:52.699 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-04-01T10:28:52.701 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-04-01T10:28:52.701 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-04-01T10:28:52.702 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 
2026-04-01T10:28:52.702 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-04-01T10:28:52.709 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:52.709 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:52.709 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-04-01T10:28:52.709 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.709 INFO:teuthology.orchestra.run.vm07.stdout: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:52.709 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:52.709 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:52.721 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-04-01T10:28:52.721 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1 2026-04-01T10:28:52.811 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr 2026-04-01T10:28:52.811 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal. 2026-04-01T10:28:52.814 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-04-01T10:28:52.815 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do. 2026-04-01T10:28:52.815 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 
2026-04-01T10:28:52.846 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1 2026-04-01T10:28:52.884 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 1/1 2026-04-01T10:28:52.884 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:52.884 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-04-01T10:28:52.884 INFO:teuthology.orchestra.run.vm03.stdout: cephadm-2:20.2.0-8.g0597158282e.el9.clyso.noarch 2026-04-01T10:28:52.884 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:52.884 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:52.896 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repo Size 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 2.5 M 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages: 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 510 k 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies: 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-daemon x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 90 k 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-proxy2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 52 k 2026-04-01T10:28:52.897 
INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 187 k 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:Remove 5 Packages 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 3.3 M 2026-04-01T10:28:52.897 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-04-01T10:28:52.899 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-04-01T10:28:52.899 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-04-01T10:28:52.910 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 
2026-04-01T10:28:52.910 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-04-01T10:28:52.936 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-04-01T10:28:52.938 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86 1/5 2026-04-01T10:28:52.939 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.cl 2/5 2026-04-01T10:28:52.940 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x 3/5 2026-04-01T10:28:52.952 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x 3/5 2026-04-01T10:28:52.954 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x 4/5 2026-04-01T10:28:52.954 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 5/5 2026-04-01T10:28:52.987 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr-dashboard 2026-04-01T10:28:52.987 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal. 2026-04-01T10:28:52.991 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-04-01T10:28:52.992 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do. 2026-04-01T10:28:52.992 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 
2026-04-01T10:28:53.015 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 5/5 2026-04-01T10:28:53.015 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x 1/5 2026-04-01T10:28:53.015 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x 2/5 2026-04-01T10:28:53.015 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 3/5 2026-04-01T10:28:53.015 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.cl 4/5 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86 5/5 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:53.055 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:53.068 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-immutable-object-cache 2026-04-01T10:28:53.068 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 
2026-04-01T10:28:53.071 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-04-01T10:28:53.072 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-04-01T10:28:53.072 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:53.164 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-04-01T10:28:53.164 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal. 2026-04-01T10:28:53.167 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-04-01T10:28:53.168 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do. 2026-04-01T10:28:53.168 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:53.227 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: libcephfs-devel 2026-04-01T10:28:53.227 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:53.231 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:53.231 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-04-01T10:28:53.231 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:53.245 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr 2026-04-01T10:28:53.246 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-04-01T10:28:53.249 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-04-01T10:28:53.250 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-04-01T10:28:53.250 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:53.331 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr-rook 2026-04-01T10:28:53.331 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal. 2026-04-01T10:28:53.335 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-04-01T10:28:53.335 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do. 
2026-04-01T10:28:53.335 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:53.409 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: Package Arch Version Repository Size 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout:Removing: 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: librados2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 12 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout:Removing dependent packages: 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 1.1 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 1.1 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 265 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: qemu-kvm-block-rbd x86_64 17:9.1.0-29.el9_7.6 @appstream 41 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 238 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 498 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout:Removing unused dependencies: 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: x86_64 1.75.0-13.el9_7 @appstream 276 k 2026-04-01T10:28:53.411 
INFO:teuthology.orchestra.run.vm07.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 461 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: librbd1 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 10 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: librgw2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 27 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: lmdb-libs x86_64 0.9.29-3.el9 @baseos 106 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout:Transaction Summary 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout:================================================================================ 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout:Remove 21 Packages 2026-04-01T10:28:53.411 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:53.412 INFO:teuthology.orchestra.run.vm07.stdout:Freed space: 84 M 
2026-04-01T10:28:53.412 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction check 2026-04-01T10:28:53.416 INFO:teuthology.orchestra.run.vm07.stdout:Transaction check succeeded. 2026-04-01T10:28:53.416 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction test 2026-04-01T10:28:53.422 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-dashboard 2026-04-01T10:28:53.422 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-04-01T10:28:53.425 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-04-01T10:28:53.426 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-04-01T10:28:53.426 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:53.432 INFO:teuthology.orchestra.run.vm07.stdout:Transaction test succeeded. 2026-04-01T10:28:53.432 INFO:teuthology.orchestra.run.vm07.stdout:Running transaction 2026-04-01T10:28:53.469 INFO:teuthology.orchestra.run.vm07.stdout: Preparing : 1/1 2026-04-01T10:28:53.472 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/21 2026-04-01T10:28:53.474 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2/21 2026-04-01T10:28:53.478 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 3/21 2026-04-01T10:28:53.478 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 4/21 2026-04-01T10:28:53.491 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 4/21 2026-04-01T10:28:53.494 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/21 2026-04-01T10:28:53.496 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 6/21 2026-04-01T10:28:53.498 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : 
python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x8 7/21 2026-04-01T10:28:53.500 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64 8/21 2026-04-01T10:28:53.500 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr-cephadm 2026-04-01T10:28:53.500 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal. 2026-04-01T10:28:53.502 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/21 2026-04-01T10:28:53.502 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 10/21 2026-04-01T10:28:53.503 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-04-01T10:28:53.504 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do. 2026-04-01T10:28:53.504 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:53.517 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 10/21 2026-04-01T10:28:53.517 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 11/21 2026-04-01T10:28:53.517 INFO:teuthology.orchestra.run.vm07.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-04-01T10:28:53.517 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:53.533 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 11/21 2026-04-01T10:28:53.535 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/21 2026-04-01T10:28:53.538 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/21 2026-04-01T10:28:53.542 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/21 2026-04-01T10:28:53.545 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/21 2026-04-01T10:28:53.547 
INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/21 2026-04-01T10:28:53.549 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/21 2026-04-01T10:28:53.551 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : boost-program-options-1.75.0-13.el9_7.x86_64 18/21 2026-04-01T10:28:53.553 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : lmdb-libs-0.9.29-3.el9.x86_64 19/21 2026-04-01T10:28:53.556 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 20/21 2026-04-01T10:28:53.570 INFO:teuthology.orchestra.run.vm07.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 21/21 2026-04-01T10:28:53.604 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-04-01T10:28:53.604 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-04-01T10:28:53.607 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-04-01T10:28:53.608 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-04-01T10:28:53.608 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 
2026-04-01T10:28:53.635 INFO:teuthology.orchestra.run.vm07.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 21/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : boost-program-options-1.75.0-13.el9_7.x86_64 1/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 7/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 8/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 10/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lmdb-libs-0.9.29-3.el9.x86_64 11/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 12/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 13/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x8 14/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 15/21 2026-04-01T10:28:53.636 
INFO:teuthology.orchestra.run.vm07.stdout: Verifying : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_ 16/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64 17/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 18/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 19/21 2026-04-01T10:28:53.636 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : re2-1:20211101-20.el9.x86_64 20/21 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 21/21 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout:Removed: 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: boost-program-options-1.75.0-13.el9_7.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-9.0.0-15.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: libnbd-1.20.3-4.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: 
lmdb-libs-0.9.29-3.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: re2-1:20211101-20.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: thrift-0.15.0-4.el9.x86_64 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout: 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 2.7 M 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies: 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout: fuse x86_64 2.9.9-17.el9 @baseos 213 k 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:53.680 INFO:teuthology.orchestra.run.vm00.stdout:Remove 2 Packages 2026-04-01T10:28:53.681 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:53.681 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 2.9 M 2026-04-01T10:28:53.681 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-04-01T10:28:53.683 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-04-01T10:28:53.683 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-04-01T10:28:53.694 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-04-01T10:28:53.694 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-04-01T10:28:53.723 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-04-01T10:28:53.726 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/2 2026-04-01T10:28:53.740 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : fuse-2.9.9-17.el9.x86_64 2/2 2026-04-01T10:28:53.785 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-rook 2026-04-01T10:28:53.785 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-04-01T10:28:53.788 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-04-01T10:28:53.789 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-04-01T10:28:53.789 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:53.803 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: fuse-2.9.9-17.el9.x86_64 2/2 2026-04-01T10:28:53.803 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/2 2026-04-01T10:28:53.844 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 2/2 2026-04-01T10:28:53.844 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:53.844 INFO:teuthology.orchestra.run.vm00.stdout:Removed: 2026-04-01T10:28:53.844 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 fuse-2.9.9-17.el9.x86_64 2026-04-01T10:28:53.844 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:53.844 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:53.864 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: librbd1 2026-04-01T10:28:53.864 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:53.866 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 
2026-04-01T10:28:53.867 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-04-01T10:28:53.867 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:53.960 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-cephadm 2026-04-01T10:28:53.961 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-04-01T10:28:53.964 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-04-01T10:28:53.964 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-04-01T10:28:53.964 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:54.018 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-volume 2026-04-01T10:28:54.018 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal. 2026-04-01T10:28:54.021 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-04-01T10:28:54.022 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do. 2026-04-01T10:28:54.022 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:54.039 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rados 2026-04-01T10:28:54.039 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:54.042 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:54.043 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-04-01T10:28:54.043 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 
2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:Removing: 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 2.7 M 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies: 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout: fuse x86_64 2.9.9-17.el9 @baseos 213 k 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:54.148 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 2.9 M 2026-04-01T10:28:54.149 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-04-01T10:28:54.150 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-04-01T10:28:54.150 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-04-01T10:28:54.162 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 2026-04-01T10:28:54.162 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-04-01T10:28:54.193 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-04-01T10:28:54.196 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repo Size 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 449 k 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages: 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 154 k 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:Remove 2 Packages 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 603 k 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-04-01T10:28:54.197 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/2 2026-04-01T10:28:54.199 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-04-01T10:28:54.199 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-04-01T10:28:54.210 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-04-01T10:28:54.211 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-04-01T10:28:54.212 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rgw 2026-04-01T10:28:54.212 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:54.212 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : fuse-2.9.9-17.el9.x86_64 2/2 2026-04-01T10:28:54.215 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:54.216 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-04-01T10:28:54.216 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:54.237 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-04-01T10:28:54.239 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x8 1/2 2026-04-01T10:28:54.253 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:54.275 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: fuse-2.9.9-17.el9.x86_64 2/2 2026-04-01T10:28:54.276 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 1/2 2026-04-01T10:28:54.308 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:54.308 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x8 1/2 2026-04-01T10:28:54.316 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 2/2 2026-04-01T10:28:54.316 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:54.316 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-04-01T10:28:54.316 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 fuse-2.9.9-17.el9.x86_64 2026-04-01T10:28:54.316 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:54.316 
INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:54.345 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:54.345 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:54.345 INFO:teuthology.orchestra.run.vm00.stdout:Removed: 2026-04-01T10:28:54.345 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.345 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.345 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:54.346 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:54.387 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-cephfs 2026-04-01T10:28:54.388 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:54.390 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:54.391 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-04-01T10:28:54.391 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:54.492 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-volume 2026-04-01T10:28:54.492 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-04-01T10:28:54.496 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-04-01T10:28:54.496 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-04-01T10:28:54.496 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:54.531 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-04-01T10:28:54.531 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repo Size 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 2.5 M 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages: 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 510 k 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies: 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-daemon x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 90 k 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2 x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 52 k 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 187 k 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:Remove 5 Packages 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 3.3 M 2026-04-01T10:28:54.532 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-04-01T10:28:54.534 
INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-04-01T10:28:54.534 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-04-01T10:28:54.545 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 2026-04-01T10:28:54.545 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-04-01T10:28:54.565 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: python3-rbd 2026-04-01T10:28:54.565 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:54.568 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:54.569 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 2026-04-01T10:28:54.569 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:54.572 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-04-01T10:28:54.574 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86 1/5 2026-04-01T10:28:54.575 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.cl 2/5 2026-04-01T10:28:54.576 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x 3/5 2026-04-01T10:28:54.588 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x 3/5 2026-04-01T10:28:54.591 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x 4/5 2026-04-01T10:28:54.591 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 5/5 2026-04-01T10:28:54.658 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 5/5 2026-04-01T10:28:54.658 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x 1/5 2026-04-01T10:28:54.658 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x 2/5 2026-04-01T10:28:54.658 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 3/5 2026-04-01T10:28:54.658 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.cl 4/5 2026-04-01T10:28:54.693 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-04-01T10:28:54.693 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:54.693 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size 2026-04-01T10:28:54.693 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:54.693 INFO:teuthology.orchestra.run.vm03.stdout:Removing: 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 449 k 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages: 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph 154 k 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 603 k 2026-04-01T10:28:54.694 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-04-01T10:28:54.696 
INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-04-01T10:28:54.696 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86 5/5 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout:Removed: 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout: 2026-04-01T10:28:54.707 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:54.708 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 2026-04-01T10:28:54.708 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-04-01T10:28:54.738 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-04-01T10:28:54.741 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x8 1/2 2026-04-01T10:28:54.743 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-fuse 2026-04-01T10:28:54.744 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:54.747 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:54.747 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 
2026-04-01T10:28:54.747 INFO:teuthology.orchestra.run.vm07.stdout:Complete! 2026-04-01T10:28:54.758 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:54.818 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:54.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x8 1/2 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86 2/2 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel-2:20.2.0-8.g0597158282e.el9.clyso.x86_64 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm03.stdout: 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: libcephfs-devel 2026-04-01T10:28:54.896 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal. 2026-04-01T10:28:54.900 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-04-01T10:28:54.900 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do. 2026-04-01T10:28:54.900 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-04-01T10:28:54.914 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-mirror 2026-04-01T10:28:54.915 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal. 2026-04-01T10:28:54.918 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved. 2026-04-01T10:28:54.918 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do. 
2026-04-01T10:28:54.918 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:55.103 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: Package               Arch   Version                           Repository  Size
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: librados2             x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph     12 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages:
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados         x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    1.1 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd           x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    1.1 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw           x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    265 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: qemu-kvm-block-rbd    x86_64 17:9.1.0-29.el9_7.6               @appstream 41 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse              x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    238 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd               x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    498 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:                       x86_64 1.75.0-13.el9_7                   @appstream 276 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: libarrow              x86_64 9.0.0-15.el9                      @epel      18 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc          noarch 9.0.0-15.el9                      @epel     122 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: libnbd                x86_64 1.20.3-4.el9                      @appstream 461 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj            x86_64 1.12.1-1.el9                      @appstream 383 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq           x86_64 0.11.0-7.el9                      @appstream 102 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: librbd1               x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph     10 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka            x86_64 1.6.1-102.el9                     @appstream 2.0 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: librgw2               x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph     27 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: lmdb-libs             x86_64 0.9.29-3.el9                      @baseos   106 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust             x86_64 2.12.0-6.el9                      @appstream 1.0 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs          x86_64 9.0.0-15.el9                      @epel     2.8 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: re2                   x86_64 1:20211101-20.el9                 @epel     472 k
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout: thrift                x86_64 0.15.0-4.el9                      @epel     4.8 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:Remove  21 Packages
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 84 M
2026-04-01T10:28:55.105 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-04-01T10:28:55.109 INFO:teuthology.orchestra.run.vm07.stdout:No match for argument: rbd-nbd
2026-04-01T10:28:55.109 INFO:teuthology.orchestra.run.vm07.stderr:No packages marked for removal.
2026-04-01T10:28:55.109 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-04-01T10:28:55.109 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-04-01T10:28:55.112 INFO:teuthology.orchestra.run.vm07.stdout:Dependencies resolved.
2026-04-01T10:28:55.113 INFO:teuthology.orchestra.run.vm07.stdout:Nothing to do.
2026-04-01T10:28:55.113 INFO:teuthology.orchestra.run.vm07.stdout:Complete!
2026-04-01T10:28:55.126 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-04-01T10:28:55.127 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-04-01T10:28:55.136 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout: Package               Arch   Version                           Repo       Size
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2            x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    2.5 M
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs        x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    510 k
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-daemon      x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph     90 k
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-proxy2      x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph     52 k
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    187 k
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:Remove  5 Packages
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 3.3 M
2026-04-01T10:28:55.137 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-04-01T10:28:55.139 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-04-01T10:28:55.139 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-04-01T10:28:55.148 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-04-01T10:28:55.149 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-04-01T10:28:55.178 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean all
2026-04-01T10:28:55.299 INFO:teuthology.orchestra.run.vm07.stdout:62 files removed
2026-04-01T10:28:55.317 DEBUG:teuthology.orchestra.run.vm07:> sudo rm /etc/yum.repos.d/ceph-source.repo
2026-04-01T10:28:55.327 INFO:teuthology.orchestra.run.vm03.stdout:  Preparing        :                                                        1/1
2026-04-01T10:28:55.338 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86     1/5
2026-04-01T10:28:55.339 DEBUG:teuthology.orchestra.run.vm07:> sudo rm /etc/yum.repos.d/ceph-noarch.repo
2026-04-01T10:28:55.344 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.cl     2/5
2026-04-01T10:28:55.344 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x     3/5
2026-04-01T10:28:55.353 INFO:teuthology.orchestra.run.vm00.stdout:  Preparing        :                                                        1/1
2026-04-01T10:28:55.366 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         1/21
2026-04-01T10:28:55.368 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x     3/5
2026-04-01T10:28:55.368 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64        2/21
2026-04-01T10:28:55.371 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x     4/5
2026-04-01T10:28:55.371 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64      5/5
2026-04-01T10:28:55.371 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_        3/21
2026-04-01T10:28:55.371 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64          4/21
2026-04-01T10:28:55.384 INFO:teuthology.orchestra.run.vm00.stdout:  Running scriptlet: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64          4/21
2026-04-01T10:28:55.386 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : parquet-libs-9.0.0-15.el9.x86_64                         5/21
2026-04-01T10:28:55.388 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_        6/21
2026-04-01T10:28:55.390 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x8        7/21
2026-04-01T10:28:55.391 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64            8/21
2026-04-01T10:28:55.393 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : libarrow-doc-9.0.0-15.el9.noarch                         9/21
2026-04-01T10:28:55.393 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         10/21
2026-04-01T10:28:55.403 DEBUG:teuthology.orchestra.run.vm07:> sudo rm /etc/yum.repos.d/ceph.repo
2026-04-01T10:28:55.409 INFO:teuthology.orchestra.run.vm00.stdout:  Running scriptlet: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         10/21
2026-04-01T10:28:55.409 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64       11/21
2026-04-01T10:28:55.409 INFO:teuthology.orchestra.run.vm00.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-04-01T10:28:55.409 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:55.422 INFO:teuthology.orchestra.run.vm00.stdout:  Running scriptlet: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64       11/21
2026-04-01T10:28:55.425 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : libarrow-9.0.0-15.el9.x86_64                            12/21
2026-04-01T10:28:55.428 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : re2-1:20211101-20.el9.x86_64                            13/21
2026-04-01T10:28:55.432 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : lttng-ust-2.12.0-6.el9.x86_64                           14/21
2026-04-01T10:28:55.435 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : thrift-0.15.0-4.el9.x86_64                              15/21
2026-04-01T10:28:55.436 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64      5/5
2026-04-01T10:28:55.436 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x     1/5
2026-04-01T10:28:55.436 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x     2/5
2026-04-01T10:28:55.436 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64      3/5
2026-04-01T10:28:55.436 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.cl     4/5
2026-04-01T10:28:55.437 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : libnbd-1.20.3-4.el9.x86_64                              16/21
2026-04-01T10:28:55.439 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : libpmemobj-1.12.1-1.el9.x86_64                          17/21
2026-04-01T10:28:55.441 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : boost-program-options-1.75.0-13.el9_7.x86_64            18/21
2026-04-01T10:28:55.443 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : lmdb-libs-0.9.29-3.el9.x86_64                           19/21
2026-04-01T10:28:55.445 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : librabbitmq-0.11.0-7.el9.x86_64                         20/21
2026-04-01T10:28:55.462 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : librdkafka-1.6.1-102.el9.x86_64                         21/21
2026-04-01T10:28:55.467 DEBUG:teuthology.orchestra.run.vm07:> sudo yum clean expire-cache
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86     5/5
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:  libcephfs-daemon-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:  libcephfs-proxy2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:  libcephfs2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:  python3-ceph-argparse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:  python3-cephfs-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:55.501 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Running scriptlet: librdkafka-1.6.1-102.el9.x86_64                         21/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : boost-program-options-1.75.0-13.el9_7.x86_64             1/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : libarrow-9.0.0-15.el9.x86_64                             2/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : libarrow-doc-9.0.0-15.el9.noarch                         3/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : libnbd-1.20.3-4.el9.x86_64                               4/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : libpmemobj-1.12.1-1.el9.x86_64                           5/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : librabbitmq-0.11.0-7.el9.x86_64                          6/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64        7/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64          8/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : librdkafka-1.6.1-102.el9.x86_64                          9/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         10/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : lmdb-libs-0.9.29-3.el9.x86_64                           11/21
2026-04-01T10:28:55.542 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : lttng-ust-2.12.0-6.el9.x86_64                           12/21
2026-04-01T10:28:55.543 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : parquet-libs-9.0.0-15.el9.x86_64                        13/21
2026-04-01T10:28:55.543 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x8       14/21
2026-04-01T10:28:55.543 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_       15/21
2026-04-01T10:28:55.543 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_       16/21
2026-04-01T10:28:55.543 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64           17/21
2026-04-01T10:28:55.543 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64        18/21
2026-04-01T10:28:55.543 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         19/21
2026-04-01T10:28:55.543 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : re2-1:20211101-20.el9.x86_64                            20/21
2026-04-01T10:28:55.619 INFO:teuthology.orchestra.run.vm07.stdout:Cache was expired
2026-04-01T10:28:55.619 INFO:teuthology.orchestra.run.vm07.stdout:0 files removed
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  Verifying        : thrift-0.15.0-4.el9.x86_64                              21/21
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  boost-program-options-1.75.0-13.el9_7.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  libarrow-9.0.0-15.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  libarrow-doc-9.0.0-15.el9.noarch
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  libnbd-1.20.3-4.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  libpmemobj-1.12.1-1.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  librabbitmq-0.11.0-7.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  librdkafka-1.6.1-102.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  lmdb-libs-0.9.29-3.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  lttng-ust-2.12.0-6.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  parquet-libs-9.0.0-15.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  re2-1:20211101-20.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:  thrift-0.15.0-4.el9.x86_64
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:
2026-04-01T10:28:55.642 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:55.643 DEBUG:teuthology.parallel:result is None
2026-04-01T10:28:55.708 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: libcephfs-devel
2026-04-01T10:28:55.708 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:55.711 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:55.712 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:55.712 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:55.810 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: librbd1
2026-04-01T10:28:55.810 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-04-01T10:28:55.813 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:55.814 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-04-01T10:28:55.814 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:55.896 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout: Package               Arch   Version                           Repository  Size
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout: librados2             x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph     12 M
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados         x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    1.1 M
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd           x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    1.1 M
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw           x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    265 k
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout: qemu-kvm-block-rbd    x86_64 17:9.1.0-29.el9_7.6               @appstream 41 k
2026-04-01T10:28:55.897 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse              x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    238 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd               x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph    498 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:                       x86_64 1.75.0-13.el9_7                   @appstream 276 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: libarrow              x86_64 9.0.0-15.el9                      @epel      18 M
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc          noarch 9.0.0-15.el9                      @epel     122 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: libnbd                x86_64 1.20.3-4.el9                      @appstream 461 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj            x86_64 1.12.1-1.el9                      @appstream 383 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq           x86_64 0.11.0-7.el9                      @appstream 102 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: librbd1               x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph     10 M
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka            x86_64 1.6.1-102.el9                     @appstream 2.0 M
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: librgw2               x86_64 2:20.2.0-8.g0597158282e.el9.clyso @ceph     27 M
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: lmdb-libs             x86_64 0.9.29-3.el9                      @baseos   106 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust             x86_64 2.12.0-6.el9                      @appstream 1.0 M
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs          x86_64 9.0.0-15.el9                      @epel     2.8 M
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: re2                   x86_64 1:20211101-20.el9                 @epel     472 k
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout: thrift                x86_64 0.15.0-4.el9                      @epel     4.8 M
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:Remove  21 Packages
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 84 M
2026-04-01T10:28:55.898 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-04-01T10:28:55.902 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-04-01T10:28:55.902 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-04-01T10:28:55.919 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-04-01T10:28:55.920 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-04-01T10:28:55.967 INFO:teuthology.orchestra.run.vm03.stdout:  Preparing        :                                                        1/1
2026-04-01T10:28:55.970 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         1/21
2026-04-01T10:28:55.972 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64        2/21
2026-04-01T10:28:55.975 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_        3/21
2026-04-01T10:28:55.975 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64          4/21
2026-04-01T10:28:55.987 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: python3-rados
2026-04-01T10:28:55.987 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-04-01T10:28:55.989 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64          4/21
2026-04-01T10:28:55.991 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:55.991 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : parquet-libs-9.0.0-15.el9.x86_64                         5/21
2026-04-01T10:28:55.992 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-04-01T10:28:55.992 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:55.994 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_        6/21
2026-04-01T10:28:55.995 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x8        7/21
2026-04-01T10:28:55.997 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64            8/21
2026-04-01T10:28:56.000 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : libarrow-doc-9.0.0-15.el9.noarch                         9/21
2026-04-01T10:28:56.000 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         10/21
2026-04-01T10:28:56.016 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         10/21
2026-04-01T10:28:56.016 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64       11/21
2026-04-01T10:28:56.016 INFO:teuthology.orchestra.run.vm03.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-04-01T10:28:56.016 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:56.030 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64       11/21
2026-04-01T10:28:56.033 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : libarrow-9.0.0-15.el9.x86_64                            12/21
2026-04-01T10:28:56.036 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : re2-1:20211101-20.el9.x86_64                            13/21
2026-04-01T10:28:56.039 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : lttng-ust-2.12.0-6.el9.x86_64                           14/21
2026-04-01T10:28:56.042 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : thrift-0.15.0-4.el9.x86_64                              15/21
2026-04-01T10:28:56.045 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : libnbd-1.20.3-4.el9.x86_64                              16/21
2026-04-01T10:28:56.048 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : libpmemobj-1.12.1-1.el9.x86_64                          17/21
2026-04-01T10:28:56.050 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : boost-program-options-1.75.0-13.el9_7.x86_64            18/21
2026-04-01T10:28:56.051 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : lmdb-libs-0.9.29-3.el9.x86_64                           19/21
2026-04-01T10:28:56.054 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : librabbitmq-0.11.0-7.el9.x86_64                         20/21
2026-04-01T10:28:56.067 INFO:teuthology.orchestra.run.vm03.stdout:  Erasing          : librdkafka-1.6.1-102.el9.x86_64                         21/21
2026-04-01T10:28:56.127 INFO:teuthology.orchestra.run.vm03.stdout:  Running scriptlet: librdkafka-1.6.1-102.el9.x86_64                         21/21
2026-04-01T10:28:56.127 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : boost-program-options-1.75.0-13.el9_7.x86_64             1/21
2026-04-01T10:28:56.127 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : libarrow-9.0.0-15.el9.x86_64                             2/21
2026-04-01T10:28:56.127 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : libarrow-doc-9.0.0-15.el9.noarch                         3/21
2026-04-01T10:28:56.127 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : libnbd-1.20.3-4.el9.x86_64                               4/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : libpmemobj-1.12.1-1.el9.x86_64                           5/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : librabbitmq-0.11.0-7.el9.x86_64                          6/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64        7/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64          8/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : librdkafka-1.6.1-102.el9.x86_64                          9/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         10/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : lmdb-libs-0.9.29-3.el9.x86_64                           11/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : lttng-ust-2.12.0-6.el9.x86_64                           12/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : parquet-libs-9.0.0-15.el9.x86_64                        13/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x8       14/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_       15/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_       16/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64           17/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64        18/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64         19/21
2026-04-01T10:28:56.128 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : re2-1:20211101-20.el9.x86_64                            20/21
2026-04-01T10:28:56.157 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: python3-rgw
2026-04-01T10:28:56.157 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-04-01T10:28:56.160 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:56.161 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-04-01T10:28:56.161 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:56.170 INFO:teuthology.orchestra.run.vm03.stdout:  Verifying        : thrift-0.15.0-4.el9.x86_64                              21/21
2026-04-01T10:28:56.170 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:56.170 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-04-01T10:28:56.170 INFO:teuthology.orchestra.run.vm03.stdout:  boost-program-options-1.75.0-13.el9_7.x86_64
2026-04-01T10:28:56.170 INFO:teuthology.orchestra.run.vm03.stdout:  libarrow-9.0.0-15.el9.x86_64
2026-04-01T10:28:56.170 INFO:teuthology.orchestra.run.vm03.stdout:  libarrow-doc-9.0.0-15.el9.noarch
2026-04-01T10:28:56.170 INFO:teuthology.orchestra.run.vm03.stdout:  libnbd-1.20.3-4.el9.x86_64
2026-04-01T10:28:56.170 INFO:teuthology.orchestra.run.vm03.stdout:  libpmemobj-1.12.1-1.el9.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  librabbitmq-0.11.0-7.el9.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  librados2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  librbd1-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  librdkafka-1.6.1-102.el9.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  librgw2-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  lmdb-libs-0.9.29-3.el9.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  lttng-ust-2.12.0-6.el9.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  parquet-libs-9.0.0-15.el9.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  python3-rados-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  python3-rbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  python3-rgw-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  qemu-kvm-block-rbd-17:9.1.0-29.el9_7.6.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  rbd-fuse-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  rbd-nbd-2:20.2.0-8.g0597158282e.el9.clyso.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  re2-1:20211101-20.el9.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:  thrift-0.15.0-4.el9.x86_64
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:
2026-04-01T10:28:56.171 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:56.321 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: python3-cephfs
2026-04-01T10:28:56.321 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-04-01T10:28:56.325 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:56.325 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-04-01T10:28:56.325 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:56.352 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: librbd1
2026-04-01T10:28:56.353 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:56.356 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:56.356 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:56.356 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:56.489 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: python3-rbd
2026-04-01T10:28:56.489 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-04-01T10:28:56.493 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:56.493 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-04-01T10:28:56.493 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:56.526 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rados
2026-04-01T10:28:56.526 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:56.529 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:56.530 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:56.530 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:56.653 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: rbd-fuse
2026-04-01T10:28:56.653 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-04-01T10:28:56.657 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:56.657 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-04-01T10:28:56.657 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:56.699 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rgw
2026-04-01T10:28:56.699 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:56.702 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:56.702 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:56.703 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:56.814 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: rbd-mirror
2026-04-01T10:28:56.814 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-04-01T10:28:56.817 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:56.818 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-04-01T10:28:56.818 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:56.879 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-cephfs
2026-04-01T10:28:56.880 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:56.882 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:56.883 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:56.883 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:56.977 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: rbd-nbd
2026-04-01T10:28:56.978 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-04-01T10:28:56.981 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-04-01T10:28:56.981 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-04-01T10:28:56.982 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-04-01T10:28:56.999 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean all
2026-04-01T10:28:57.043 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rbd
2026-04-01T10:28:57.043 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:57.046 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:57.046 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:57.046 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:57.119 INFO:teuthology.orchestra.run.vm00.stdout:62 files removed
2026-04-01T10:28:57.138 DEBUG:teuthology.orchestra.run.vm00:> sudo rm /etc/yum.repos.d/ceph-source.repo
2026-04-01T10:28:57.163 DEBUG:teuthology.orchestra.run.vm00:> sudo rm /etc/yum.repos.d/ceph-noarch.repo
2026-04-01T10:28:57.205 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-fuse
2026-04-01T10:28:57.205 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:57.208 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:57.209 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:57.209 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:57.228 DEBUG:teuthology.orchestra.run.vm00:> sudo rm /etc/yum.repos.d/ceph.repo
2026-04-01T10:28:57.293 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean expire-cache
2026-04-01T10:28:57.364 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-mirror
2026-04-01T10:28:57.364 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:57.367 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:57.368 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:57.368 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:57.445 INFO:teuthology.orchestra.run.vm00.stdout:Cache was expired
2026-04-01T10:28:57.445 INFO:teuthology.orchestra.run.vm00.stdout:0 files removed
2026-04-01T10:28:57.464 DEBUG:teuthology.parallel:result is None
2026-04-01T10:28:57.527 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-nbd
2026-04-01T10:28:57.527 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-04-01T10:28:57.530 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-04-01T10:28:57.530 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-04-01T10:28:57.530 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-04-01T10:28:57.553 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean all
2026-04-01T10:28:57.669 INFO:teuthology.orchestra.run.vm03.stdout:62 files removed
2026-04-01T10:28:57.691 DEBUG:teuthology.orchestra.run.vm03:> sudo rm /etc/yum.repos.d/ceph-source.repo
2026-04-01T10:28:57.713 DEBUG:teuthology.orchestra.run.vm03:> sudo rm /etc/yum.repos.d/ceph-noarch.repo
2026-04-01T10:28:57.776 DEBUG:teuthology.orchestra.run.vm03:> sudo rm /etc/yum.repos.d/ceph.repo
2026-04-01T10:28:57.838 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean expire-cache
2026-04-01T10:28:57.987 INFO:teuthology.orchestra.run.vm03.stdout:Cache was expired
2026-04-01T10:28:57.987 INFO:teuthology.orchestra.run.vm03.stdout:0 files removed
2026-04-01T10:28:58.007 DEBUG:teuthology.parallel:result is None
2026-04-01T10:28:58.007 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm00.local
2026-04-01T10:28:58.007 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local
2026-04-01T10:28:58.007 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm07.local
2026-04-01T10:28:58.007 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-04-01T10:28:58.008 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-04-01T10:28:58.008 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-04-01T10:28:58.032 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-04-01T10:28:58.033 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-04-01T10:28:58.033 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-04-01T10:28:58.096 INFO:teuthology.orchestra.run.vm07.stderr:mv: cannot stat '/etc/yum/pluginconf.d/priorities.conf.orig': No such file or directory
2026-04-01T10:28:58.097 INFO:teuthology.orchestra.run.vm03.stderr:mv: cannot stat '/etc/yum/pluginconf.d/priorities.conf.orig': No such file or directory
2026-04-01T10:28:58.097 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:28:58.097 DEBUG:teuthology.parallel:result is None
2026-04-01T10:28:58.098 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:28:58.098 DEBUG:teuthology.parallel:result is None
2026-04-01T10:28:58.100 INFO:teuthology.orchestra.run.vm00.stderr:mv: cannot stat '/etc/yum/pluginconf.d/priorities.conf.orig': No such file or directory
2026-04-01T10:28:58.101 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:28:58.101 DEBUG:teuthology.parallel:result is None
2026-04-01T10:28:58.101 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-04-01T10:28:58.104 INFO:teuthology.task.clock:Checking final clock skew...
2026-04-01T10:28:58.104 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-04-01T10:28:58.143 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-04-01T10:28:58.145 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-04-01T10:28:58.158 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-04-01T10:28:58.159 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-04-01T10:28:58.161 INFO:teuthology.orchestra.run.vm07.stderr:bash: line 1: ntpq: command not found
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm00.stdout:^- 82.165.178.31 2 7 377 299 -448us[ -409us] +/- 42ms
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm00.stdout:^* static.222.16.42.77.clie> 2 6 377 41 +5343ns[ +21us] +/- 2625us
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm00.stdout:^- node-4.infogral.is 2 6 377 106 +20us[ +31us] +/- 15ms
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm00.stdout:^- 172-104-149-161.ip.linod> 2 6 377 44 +5151us[+5166us] +/- 33ms
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm03.stdout:^- 82.165.178.31 2 8 377 238 -500us[ -463us] +/- 43ms
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm03.stdout:^* static.222.16.42.77.clie> 2 6 377 38 -56us[ -66us] +/- 2715us
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm03.stdout:^- node-4.infogral.is 2 7 377 44 +4320ns[-5309ns] +/- 15ms
2026-04-01T10:28:58.237 INFO:teuthology.orchestra.run.vm03.stdout:^- 172-104-149-161.ip.linod> 2 6 377 40 +5214us[+5204us] +/- 33ms
2026-04-01T10:28:58.238 INFO:teuthology.orchestra.run.vm07.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-04-01T10:28:58.239 INFO:teuthology.orchestra.run.vm07.stdout:===============================================================================
2026-04-01T10:28:58.239 INFO:teuthology.orchestra.run.vm07.stdout:^- 172-104-149-161.ip.linod> 2 6 377 40 +5237us[+5237us] +/- 33ms
2026-04-01T10:28:58.239 INFO:teuthology.orchestra.run.vm07.stdout:^- 82.165.178.31 2 8 377 109 -355us[ -358us] +/- 45ms
2026-04-01T10:28:58.239 INFO:teuthology.orchestra.run.vm07.stdout:^* static.222.16.42.77.clie> 2 6 377 43 +31us[ +46us] +/- 2635us
2026-04-01T10:28:58.239 INFO:teuthology.orchestra.run.vm07.stdout:^- node-4.infogral.is 2 6 377 43 +25us[ +25us] +/- 15ms
2026-04-01T10:28:58.239 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-04-01T10:28:58.241 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-04-01T10:28:58.241 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-04-01T10:28:58.244 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-04-01T10:28:58.246 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-04-01T10:28:58.248 INFO:teuthology.task.internal:Duration was 2375.599692 seconds
2026-04-01T10:28:58.248 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-04-01T10:28:58.250 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-04-01T10:28:58.250 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-04-01T10:28:58.280 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-04-01T10:28:58.281 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-04-01T10:28:58.319 INFO:teuthology.orchestra.run.vm07.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-04-01T10:28:58.319 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-04-01T10:28:58.320 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-04-01T10:28:58.646 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-04-01T10:28:58.646 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local
2026-04-01T10:28:58.646 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-04-01T10:28:58.702 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-04-01T10:28:58.702 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-04-01T10:28:58.725 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local
2026-04-01T10:28:58.725 DEBUG:teuthology.orchestra.run.vm07:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-04-01T10:28:58.746 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-04-01T10:28:58.746 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-04-01T10:28:58.747 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-04-01T10:28:58.767 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-04-01T10:28:59.201 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-04-01T10:28:59.201 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-04-01T10:28:59.202 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-04-01T10:28:59.204 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-04-01T10:28:59.226 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-04-01T10:28:59.226 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-04-01T10:28:59.227 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-04-01T10:28:59.227 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-04-01T10:28:59.227 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-04-01T10:28:59.227 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-04-01T10:28:59.227 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-04-01T10:28:59.227 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-04-01T10:28:59.227 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-04-01T10:28:59.228 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5/home/ubuntu/cephtest/archive/syslog/kern.log: --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-04-01T10:28:59.228 INFO:teuthology.orchestra.run.vm07.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-04-01T10:28:59.228 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-04-01T10:28:59.228 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-04-01T10:28:59.228 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-04-01T10:28:59.228 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-04-01T10:28:59.346 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.5% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-04-01T10:28:59.354 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-04-01T10:28:59.387 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.5% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-04-01T10:28:59.389 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-04-01T10:28:59.392 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-04-01T10:28:59.392 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-04-01T10:28:59.455 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-04-01T10:28:59.479 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-04-01T10:28:59.502 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-04-01T10:28:59.504 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-04-01T10:28:59.506 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-04-01T10:28:59.521 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-04-01T10:28:59.528 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-04-01T10:28:59.545 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-04-01T10:28:59.564 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core
2026-04-01T10:28:59.578 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-04-01T10:28:59.597 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:28:59.597 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-04-01T10:28:59.615 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:28:59.615 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump
2026-04-01T10:28:59.632 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:28:59.632 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-04-01T10:28:59.635 INFO:teuthology.task.internal:Transferring archived files...
2026-04-01T10:28:59.635 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/remote/vm00
2026-04-01T10:28:59.635 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-04-01T10:28:59.664 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/remote/vm03
2026-04-01T10:28:59.664 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-04-01T10:28:59.691 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/supriti-2026-04-01_09:45:36-rgw-wip-sse-s3-on-v20.2.0-none-default-vps/4721/remote/vm07
2026-04-01T10:28:59.691 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-04-01T10:28:59.715 INFO:teuthology.task.internal:Removing archive directory...
2026-04-01T10:28:59.715 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-04-01T10:28:59.717 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-04-01T10:28:59.733 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive
2026-04-01T10:28:59.770 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-04-01T10:28:59.773 INFO:teuthology.task.internal:Not uploading archives.
2026-04-01T10:28:59.773 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-04-01T10:28:59.776 INFO:teuthology.task.internal:Tidying up after the test...
2026-04-01T10:28:59.776 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-04-01T10:28:59.777 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-04-01T10:28:59.789 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-04-01T10:28:59.791 INFO:teuthology.orchestra.run.vm00.stdout: 83886251 0 drwxr-xr-x 2 ubuntu ubuntu 6 Apr 1 10:28 /home/ubuntu/cephtest
2026-04-01T10:28:59.805 INFO:teuthology.orchestra.run.vm03.stdout: 83886506 0 drwxr-xr-x 2 ubuntu ubuntu 59 Apr 1 10:28 /home/ubuntu/cephtest
2026-04-01T10:28:59.805 INFO:teuthology.orchestra.run.vm03.stdout: 83890472 4 -rw-r--r-- 1 ceph root 20 Apr 1 09:53 /home/ubuntu/cephtest/url_file
2026-04-01T10:28:59.806 INFO:teuthology.orchestra.run.vm03.stdout: 83890473 0 srwxr-xr-x 1 root root 0 Apr 1 09:53 /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock
2026-04-01T10:28:59.806 INFO:teuthology.orchestra.run.vm03.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-04-01T10:28:59.812 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-04-01T10:28:59.812 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/git.local_ceph_99e8bef8f767b591604d6078b7861a00c2936d53/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/kshtsk/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm03 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-04-01T10:28:59.812 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-04-01T10:28:59.815 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'
2026-04-01T10:28:59.816 INFO:teuthology.run:Summary data:
description: rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides supported-distros/{rocky_latest} tasks/{0-install test_dedup}}
duration: 2375.59969162941
failure_reason: 'Command failed on vm00 with status 1: ''adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'''
flavor: default
owner: supriti
sentry_event: null
status: fail
success: false
2026-04-01T10:28:59.816 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-04-01T10:28:59.826 INFO:teuthology.orchestra.run.vm07.stdout: 83886506 0 drwxr-xr-x 2 ubuntu ubuntu 59 Apr 1 10:28 /home/ubuntu/cephtest
2026-04-01T10:28:59.826 INFO:teuthology.orchestra.run.vm07.stdout: 83890527 4 -rw-r--r-- 1 ceph root 20 Apr 1 09:53 /home/ubuntu/cephtest/url_file
2026-04-01T10:28:59.826 INFO:teuthology.orchestra.run.vm07.stdout: 83890528 0 srwxr-xr-x 1 root root 0 Apr 1 09:53 /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock
2026-04-01T10:28:59.827 INFO:teuthology.orchestra.run.vm07.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-04-01T10:28:59.844 INFO:teuthology.run:FAIL