2026-03-20T12:36:47.025 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-20T12:36:47.030 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-20T12:36:47.049 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137
branch: tentacle
description: rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides supported-distros/{centos_latest} tasks/{0-install test_dedup}}
email: null
first_in_suite: false
flavor: default
job_id: '2137'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: tentacle
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      logical_volumes:
        lv_1:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_2:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_3:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_4:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
      timezone: UTC
      volume_groups:
        vg_nvme:
          pvs: /dev/vdb,/dev/vdc,/dev/vdd,/dev/vde
  ceph:
    conf:
      client:
        debug rgw: 20
        debug rgw dedup: 20
        setgroup: ceph
        setuser: ceph
      global:
        osd_max_pg_log_entries: 10
        osd_min_pg_log_entries: 10
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore allocator: bitmap
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug ms: 1
        debug osd: 20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd mclock iops capacity threshold hdd: 49000
        osd objectstore: bluestore
        osd shutdown pgref assert: true
    flavor: default
    fs: xfs
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - \(POOL_APP_NOT_ENABLED\)
    - not have an application enabled
    sha1: 70f8415b300f041766fa27faf7d5472699e32388
  ceph-deploy:
    bluestore: true
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd objectstore: bluestore
    fs: xfs
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm
  install:
    ceph:
      flavor: default
      sha1: 70f8415b300f041766fa27faf7d5472699e32388
    extra_system_packages:
      deb:
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
  rgw:
    frontend: beast
    storage classes:
      FROZEN: null
      LUKEWARM: null
  thrashosds:
    bdev_inject_crash: 2
    bdev_inject_crash_probability: 0.5
  workunit:
    branch: tt-tentacle
    sha1: 200ab49823532903ca9be3870ca957b2093ed400
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
- - client.2
seed: 9234
sha1: 70f8415b300f041766fa27faf7d5472699e32388
sleep_before_teardown: 0
suite: rgw
suite_branch: tt-tentacle
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 200ab49823532903ca9be3870ca957b2093ed400
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKrAQ2wALjNqRVwSitDTrwMbI2ae3qJpXamxI9dyPIIP/bthwD/JC3Bq4VeIKtmHSfTqu2jXJ3cEg/Fg3dT8IXI=
  vm06.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMZn+fAEzn0fqL1dQe1nMCXgSntAM8D9CmD/gV5Abdu/BmZ6UTkHjHK9viQHu8qrVAbYbrtuZFpJKKdr8DK5SRk=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMP0df182rq6IBJgcAFGlHAqNQW9wF5V8aAKvt4o5ioy1lGzCZoZimMEgVtMQC5xHdRgdbVGHnVH2pZjtVRYgt8=
tasks:
- install: null
- ceph: null
- openssl_keys: null
- rgw:
  - client.0
  - client.1
  - client.2
- tox:
  - client.0
- tox:
  - client.0
- dedup-tests:
    client.0:
      rgw_server: client.0
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-20_12:32:34
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345
2026-03-20T12:36:47.049 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa; will attempt to use it
2026-03-20T12:36:47.049 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks
2026-03-20T12:36:47.049 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-20T12:36:47.050 INFO:teuthology.task.internal:Checking packages...
2026-03-20T12:36:47.050 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash '70f8415b300f041766fa27faf7d5472699e32388'
2026-03-20T12:36:47.050 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-20T12:36:47.050 INFO:teuthology.packaging:ref: None
2026-03-20T12:36:47.050 INFO:teuthology.packaging:tag: None
2026-03-20T12:36:47.050 INFO:teuthology.packaging:branch: tentacle
2026-03-20T12:36:47.050 INFO:teuthology.packaging:sha1: 70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T12:36:47.050 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=tentacle
2026-03-20T12:36:47.818 INFO:teuthology.task.internal:Found packages for ceph version 20.2.0-721.g5bb32787
2026-03-20T12:36:47.819 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-20T12:36:47.820 INFO:teuthology.task.internal:no buildpackages task found
2026-03-20T12:36:47.820 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-20T12:36:47.820 INFO:teuthology.task.internal:Saving configuration
2026-03-20T12:36:47.825 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-20T12:36:47.826 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-20T12:36:47.832 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-20 12:35:13.825761', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKrAQ2wALjNqRVwSitDTrwMbI2ae3qJpXamxI9dyPIIP/bthwD/JC3Bq4VeIKtmHSfTqu2jXJ3cEg/Fg3dT8IXI='}
2026-03-20T12:36:47.838 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm06.local', 'description': '/archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-20 12:35:13.825535', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:06', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMZn+fAEzn0fqL1dQe1nMCXgSntAM8D9CmD/gV5Abdu/BmZ6UTkHjHK9viQHu8qrVAbYbrtuZFpJKKdr8DK5SRk='}
2026-03-20T12:36:47.843 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-20 12:35:13.825044', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMP0df182rq6IBJgcAFGlHAqNQW9wF5V8aAKvt4o5ioy1lGzCZoZimMEgVtMQC5xHdRgdbVGHnVH2pZjtVRYgt8='}
2026-03-20T12:36:47.843 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-20T12:36:47.844 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0']
2026-03-20T12:36:47.844 INFO:teuthology.task.internal:roles: ubuntu@vm06.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1']
2026-03-20T12:36:47.844 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['client.2']
2026-03-20T12:36:47.844 INFO:teuthology.run_tasks:Running task console_log...
2026-03-20T12:36:47.851 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-20T12:36:47.857 DEBUG:teuthology.task.console_log:vm06 does not support IPMI; excluding
2026-03-20T12:36:47.864 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-20T12:36:47.864 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f5ac9d1c5e0>, signals=[15])
2026-03-20T12:36:47.864 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-20T12:36:47.865 INFO:teuthology.task.internal:Opening connections...
2026-03-20T12:36:47.865 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-20T12:36:47.866 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T12:36:47.924 DEBUG:teuthology.task.internal:connecting to ubuntu@vm06.local
2026-03-20T12:36:47.925 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T12:36:47.984 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-20T12:36:47.984 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T12:36:48.042 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-20T12:36:48.043 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-20T12:36:48.058 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-20T12:36:48.058 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:NAME="CentOS Stream"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="9"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:ID="centos"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE="rhel fedora"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="9"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:PLATFORM_ID="platform:el9"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:ANSI_COLOR="0;31"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:LOGO="fedora-logo-icon"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://centos.org/"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-20T12:36:48.113 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-20T12:36:48.113 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-20T12:36:48.118 DEBUG:teuthology.orchestra.run.vm06:> uname -m
2026-03-20T12:36:48.134 INFO:teuthology.orchestra.run.vm06.stdout:x86_64
2026-03-20T12:36:48.134 DEBUG:teuthology.orchestra.run.vm06:> cat /etc/os-release
2026-03-20T12:36:48.187 INFO:teuthology.orchestra.run.vm06.stdout:NAME="CentOS Stream"
2026-03-20T12:36:48.187 INFO:teuthology.orchestra.run.vm06.stdout:VERSION="9"
2026-03-20T12:36:48.187 INFO:teuthology.orchestra.run.vm06.stdout:ID="centos"
2026-03-20T12:36:48.187 INFO:teuthology.orchestra.run.vm06.stdout:ID_LIKE="rhel fedora"
2026-03-20T12:36:48.187 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_ID="9"
2026-03-20T12:36:48.187 INFO:teuthology.orchestra.run.vm06.stdout:PLATFORM_ID="platform:el9"
2026-03-20T12:36:48.187 INFO:teuthology.orchestra.run.vm06.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-20T12:36:48.188 INFO:teuthology.orchestra.run.vm06.stdout:ANSI_COLOR="0;31"
2026-03-20T12:36:48.188 INFO:teuthology.orchestra.run.vm06.stdout:LOGO="fedora-logo-icon"
2026-03-20T12:36:48.188 INFO:teuthology.orchestra.run.vm06.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-20T12:36:48.188 INFO:teuthology.orchestra.run.vm06.stdout:HOME_URL="https://centos.org/"
2026-03-20T12:36:48.188 INFO:teuthology.orchestra.run.vm06.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-20T12:36:48.188 INFO:teuthology.orchestra.run.vm06.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-20T12:36:48.188 INFO:teuthology.orchestra.run.vm06.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-20T12:36:48.188 INFO:teuthology.lock.ops:Updating vm06.local on lock server
2026-03-20T12:36:48.192 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-20T12:36:48.209 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-20T12:36:48.209 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-20T12:36:48.264 INFO:teuthology.orchestra.run.vm09.stdout:NAME="CentOS Stream"
2026-03-20T12:36:48.264 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="9"
2026-03-20T12:36:48.264 INFO:teuthology.orchestra.run.vm09.stdout:ID="centos"
2026-03-20T12:36:48.264 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE="rhel fedora"
2026-03-20T12:36:48.264 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="9"
2026-03-20T12:36:48.264 INFO:teuthology.orchestra.run.vm09.stdout:PLATFORM_ID="platform:el9"
2026-03-20T12:36:48.264 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-20T12:36:48.265 INFO:teuthology.orchestra.run.vm09.stdout:ANSI_COLOR="0;31"
2026-03-20T12:36:48.265 INFO:teuthology.orchestra.run.vm09.stdout:LOGO="fedora-logo-icon"
2026-03-20T12:36:48.265 INFO:teuthology.orchestra.run.vm09.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-20T12:36:48.265 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://centos.org/"
2026-03-20T12:36:48.265 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-20T12:36:48.265 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-20T12:36:48.265 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-20T12:36:48.265 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-20T12:36:48.269 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-20T12:36:48.271 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-20T12:36:48.272 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-20T12:36:48.272 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-20T12:36:48.274 DEBUG:teuthology.orchestra.run.vm06:> test '!' -e /home/ubuntu/cephtest
2026-03-20T12:36:48.276 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-20T12:36:48.319 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-20T12:36:48.320 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-20T12:36:48.320 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-20T12:36:48.328 DEBUG:teuthology.orchestra.run.vm06:> test -z $(ls -A /var/lib/ceph)
2026-03-20T12:36:48.330 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-20T12:36:48.342 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-20T12:36:48.343 INFO:teuthology.orchestra.run.vm06.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-20T12:36:48.374 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-20T12:36:48.374 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-20T12:36:48.382 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-20T12:36:48.396 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T12:36:48.592 DEBUG:teuthology.orchestra.run.vm06:> test -e /ceph-qa-ready
2026-03-20T12:36:48.609 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T12:36:48.807 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-20T12:36:48.822 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T12:36:49.011 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-20T12:36:49.012 INFO:teuthology.task.internal:Creating test directory...
2026-03-20T12:36:49.012 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-20T12:36:49.014 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-20T12:36:49.016 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-20T12:36:49.036 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-20T12:36:49.037 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-20T12:36:49.038 INFO:teuthology.task.internal:Creating archive directory...
2026-03-20T12:36:49.038 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-20T12:36:49.072 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-20T12:36:49.073 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-20T12:36:49.096 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-20T12:36:49.097 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-20T12:36:49.097 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-20T12:36:49.146 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T12:36:49.146 DEBUG:teuthology.orchestra.run.vm06:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-20T12:36:49.161 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T12:36:49.162 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-20T12:36:49.178 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T12:36:49.179 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-20T12:36:49.188 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-20T12:36:49.203 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-20T12:36:49.212 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T12:36:49.223 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T12:36:49.228 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T12:36:49.238 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T12:36:49.245 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T12:36:49.255 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T12:36:49.257 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-20T12:36:49.258 INFO:teuthology.task.internal:Configuring sudo...
2026-03-20T12:36:49.258 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-20T12:36:49.266 DEBUG:teuthology.orchestra.run.vm06:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-20T12:36:49.281 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-20T12:36:49.323 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-20T12:36:49.326 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-20T12:36:49.326 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-20T12:36:49.332 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-20T12:36:49.348 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-20T12:36:49.380 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-20T12:36:49.416 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-20T12:36:49.474 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:36:49.474 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-20T12:36:49.537 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-20T12:36:49.560 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-20T12:36:49.614 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:36:49.615 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-20T12:36:49.671 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-20T12:36:49.694 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-20T12:36:49.758 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-20T12:36:49.758 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-20T12:36:49.818 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-20T12:36:49.820 DEBUG:teuthology.orchestra.run.vm06:> sudo service rsyslog restart
2026-03-20T12:36:49.822 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-20T12:36:49.846 INFO:teuthology.orchestra.run.vm06.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T12:36:49.846 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T12:36:49.887 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T12:36:50.295 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-20T12:36:50.304 INFO:teuthology.task.internal:Starting timer...
2026-03-20T12:36:50.304 INFO:teuthology.run_tasks:Running task pcp...
2026-03-20T12:36:50.326 INFO:teuthology.run_tasks:Running task selinux...
2026-03-20T12:36:50.328 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-20T12:36:50.328 INFO:teuthology.task.selinux:Excluding vm06: VMs are not yet supported
2026-03-20T12:36:50.328 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-20T12:36:50.328 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-20T12:36:50.328 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-20T12:36:50.328 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-20T12:36:50.328 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-20T12:36:50.330 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}
2026-03-20T12:36:50.330 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/kshtsk/ceph-cm-ansible.git
2026-03-20T12:36:50.331 INFO:teuthology.repo_utils:Fetching github.com_kshtsk_ceph-cm-ansible_main from origin
2026-03-20T12:36:50.842 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-20T12:36:50.847 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-20T12:36:50.847 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "logical_volumes": {"lv_1": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_2": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_3": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_4": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}}, "timezone": "UTC", "volume_groups": {"vg_nvme": {"pvs": "/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde"}}}' -i /tmp/teuth_ansible_inventory75kvi4s3 --limit vm00.local,vm06.local,vm09.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-20T12:38:42.719 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm06.local'), Remote(name='ubuntu@vm09.local')]
2026-03-20T12:38:42.719 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-20T12:38:42.719 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T12:38:42.781 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-20T12:38:42.870 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-20T12:38:42.870 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm06.local'
2026-03-20T12:38:42.870 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T12:38:42.933 DEBUG:teuthology.orchestra.run.vm06:> true
2026-03-20T12:38:43.020 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm06.local'
2026-03-20T12:38:43.021 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-20T12:38:43.021 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T12:38:43.083 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-20T12:38:43.169 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-20T12:38:43.169 INFO:teuthology.run_tasks:Running task clock...
2026-03-20T12:38:43.172 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-20T12:38:43.172 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-20T12:38:43.172 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T12:38:43.174 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-20T12:38:43.174 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T12:38:43.176 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-20T12:38:43.176 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T12:38:43.205 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-20T12:38:43.207 INFO:teuthology.orchestra.run.vm06.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-20T12:38:43.222 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-20T12:38:43.225 INFO:teuthology.orchestra.run.vm06.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-20T12:38:43.241 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-20T12:38:43.253 INFO:teuthology.orchestra.run.vm00.stderr:sudo: ntpd: command not found
2026-03-20T12:38:43.258 INFO:teuthology.orchestra.run.vm06.stderr:sudo: ntpd: command not found
2026-03-20T12:38:43.260 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-20T12:38:43.267 INFO:teuthology.orchestra.run.vm00.stdout:506 Cannot talk to daemon
2026-03-20T12:38:43.272 INFO:teuthology.orchestra.run.vm06.stdout:506 Cannot talk to daemon
2026-03-20T12:38:43.284 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-20T12:38:43.291 INFO:teuthology.orchestra.run.vm06.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-20T12:38:43.292 INFO:teuthology.orchestra.run.vm09.stderr:sudo: ntpd: command not found
2026-03-20T12:38:43.302 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-20T12:38:43.306 INFO:teuthology.orchestra.run.vm09.stdout:506 Cannot talk to daemon
2026-03-20T12:38:43.308 INFO:teuthology.orchestra.run.vm06.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-20T12:38:43.325 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-20T12:38:43.344 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-20T12:38:43.352 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-20T12:38:43.354 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T12:38:43.354 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-20T12:38:43.360 INFO:teuthology.orchestra.run.vm06.stderr:bash: line 1: ntpq: command not found
2026-03-20T12:38:43.362 INFO:teuthology.orchestra.run.vm06.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T12:38:43.362 INFO:teuthology.orchestra.run.vm06.stdout:===============================================================================
2026-03-20T12:38:43.396 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-20T12:38:43.399 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T12:38:43.399 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-20T12:38:43.400 INFO:teuthology.run_tasks:Running task install...
2026-03-20T12:38:43.402 DEBUG:teuthology.task.install:project ceph
2026-03-20T12:38:43.402 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-20T12:38:43.403 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-20T12:38:43.403 INFO:teuthology.task.install:Using flavor: default
2026-03-20T12:38:43.405 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-20T12:38:43.405 INFO:teuthology.task.install:extra packages: []
2026-03-20T12:38:43.405 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-20T12:38:43.405 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T12:38:43.406 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-20T12:38:43.406 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T12:38:43.406 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-20T12:38:43.406 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T12:38:44.019 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-20T12:38:44.020 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-20T12:38:44.032 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-20T12:38:44.032 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-20T12:38:44.514 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-20T12:38:44.514 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:38:44.514 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-20T12:38:44.540 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-20T12:38:44.540 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:38:44.540 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-20T12:38:44.546 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-20T12:38:44.546 DEBUG:teuthology.orchestra.run.vm00:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-20T12:38:44.568 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-20T12:38:44.569 DEBUG:teuthology.orchestra.run.vm06:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-20T12:38:44.618 DEBUG:teuthology.orchestra.run.vm00:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-20T12:38:44.639 DEBUG:teuthology.orchestra.run.vm06:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-20T12:38:44.701 DEBUG:teuthology.orchestra.run.vm00:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-20T12:38:44.726 DEBUG:teuthology.orchestra.run.vm06:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-20T12:38:44.733 INFO:teuthology.orchestra.run.vm00.stdout:check_obsoletes = 1
2026-03-20T12:38:44.735 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean all
2026-03-20T12:38:44.753 INFO:teuthology.orchestra.run.vm06.stdout:check_obsoletes = 1
2026-03-20T12:38:44.754 DEBUG:teuthology.orchestra.run.vm06:> sudo yum clean all
2026-03-20T12:38:44.914 INFO:teuthology.orchestra.run.vm00.stdout:41 files removed
2026-03-20T12:38:44.936 INFO:teuthology.orchestra.run.vm06.stdout:41 files removed
2026-03-20T12:38:44.938 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-20T12:38:44.961 DEBUG:teuthology.orchestra.run.vm06:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-20T12:38:45.083 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-20T12:38:45.083 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-20T12:38:45.577 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-20T12:38:45.577 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-20T12:38:45.577 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-20T12:38:45.603 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-20T12:38:45.603 DEBUG:teuthology.orchestra.run.vm09:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-20T12:38:45.669 DEBUG:teuthology.orchestra.run.vm09:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-20T12:38:45.751 DEBUG:teuthology.orchestra.run.vm09:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-20T12:38:45.813 INFO:teuthology.orchestra.run.vm09.stdout:check_obsoletes = 1
2026-03-20T12:38:45.815 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all
2026-03-20T12:38:45.991 INFO:teuthology.orchestra.run.vm09.stdout:41 files removed
2026-03-20T12:38:46.014 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-20T12:38:46.274 INFO:teuthology.orchestra.run.vm00.stdout:ceph packages for x86_64 76 kB/s | 87 kB 00:01
2026-03-20T12:38:46.294 INFO:teuthology.orchestra.run.vm06.stdout:ceph packages for x86_64 75 kB/s | 87 kB 00:01
2026-03-20T12:38:47.367 INFO:teuthology.orchestra.run.vm09.stdout:ceph packages for x86_64 74 kB/s | 87 kB 00:01
2026-03-20T12:38:47.379 INFO:teuthology.orchestra.run.vm00.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-20T12:38:47.382 INFO:teuthology.orchestra.run.vm06.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-20T12:38:48.313 INFO:teuthology.orchestra.run.vm00.stdout:ceph source packages 2.1 kB/s | 1.9 kB 00:00
2026-03-20T12:38:48.313 INFO:teuthology.orchestra.run.vm06.stdout:ceph source packages 2.1 kB/s | 1.9 kB 00:00
2026-03-20T12:38:48.461 INFO:teuthology.orchestra.run.vm09.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-20T12:38:48.608 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - BaseOS 32 MB/s | 8.9 MB 00:00
2026-03-20T12:38:49.402 INFO:teuthology.orchestra.run.vm09.stdout:ceph source packages 2.1 kB/s | 1.9 kB 00:00
2026-03-20T12:38:50.372 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - BaseOS 4.4 MB/s | 8.9 MB 00:02
2026-03-20T12:38:51.267 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - AppStream 13 MB/s | 27 MB 00:02
2026-03-20T12:38:53.430 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - AppStream 11 MB/s | 27 MB 00:02
2026-03-20T12:38:55.449 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - CRB 5.9 MB/s | 8.0 MB 00:01
2026-03-20T12:38:56.034 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - BaseOS 1.3 MB/s | 8.9 MB 00:06
2026-03-20T12:38:56.874 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - Extras packages 34 kB/s | 20 kB 00:00
2026-03-20T12:38:57.648 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - AppStream 27 MB/s | 27 MB 00:01
2026-03-20T12:38:57.943 INFO:teuthology.orchestra.run.vm00.stdout:Extra Packages for Enterprise Linux 21 MB/s | 20 MB 00:00
2026-03-20T12:39:01.059 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - CRB 15 MB/s | 8.0 MB 00:00
2026-03-20T12:39:02.634 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - Extras packages 30 kB/s | 20 kB 00:00
2026-03-20T12:39:02.719 INFO:teuthology.orchestra.run.vm00.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-20T12:39:03.360 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - CRB 1.1 MB/s | 8.0 MB 00:07
2026-03-20T12:39:03.600 INFO:teuthology.orchestra.run.vm09.stdout:Extra Packages for Enterprise Linux 23 MB/s | 20 MB 00:00
2026-03-20T12:39:04.179 INFO:teuthology.orchestra.run.vm00.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-20T12:39:04.180 INFO:teuthology.orchestra.run.vm00.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-20T12:39:04.218 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-20T12:39:04.222 INFO:teuthology.orchestra.run.vm00.stdout:======================================================================================
2026-03-20T12:39:04.222 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout:======================================================================================
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout:Installing:
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 126 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout:Upgrading:
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout:Installing dependencies:
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-20T12:39:04.223 INFO:teuthology.orchestra.run.vm00.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-20T12:39:04.224 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout:Installing weak dependencies:
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-20T12:39:04.225 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:04.226 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-20T12:39:04.226 INFO:teuthology.orchestra.run.vm00.stdout:======================================================================================
2026-03-20T12:39:04.226 INFO:teuthology.orchestra.run.vm00.stdout:Install 136 Packages
2026-03-20T12:39:04.226 INFO:teuthology.orchestra.run.vm00.stdout:Upgrade 2 Packages
2026-03-20T12:39:04.226 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:04.226 INFO:teuthology.orchestra.run.vm00.stdout:Total download size: 267 M
2026-03-20T12:39:04.226 INFO:teuthology.orchestra.run.vm00.stdout:Downloading Packages:
2026-03-20T12:39:05.013 INFO:teuthology.orchestra.run.vm06.stdout:CentOS Stream 9 - Extras packages 27 kB/s | 20 kB 00:00
2026-03-20T12:39:05.879 INFO:teuthology.orchestra.run.vm06.stdout:Extra Packages for Enterprise Linux 26 MB/s | 20 MB 00:00
2026-03-20T12:39:05.927 INFO:teuthology.orchestra.run.vm00.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-20T12:39:06.747 INFO:teuthology.orchestra.run.vm00.stdout:(2/138): ceph-fuse-20.2.0-712.g70f8415b.el9.x86 1.1 MB/s | 939 kB 00:00
2026-03-20T12:39:06.867 INFO:teuthology.orchestra.run.vm00.stdout:(3/138): ceph-immutable-object-cache-20.2.0-712 1.3 MB/s | 154 kB 00:00
2026-03-20T12:39:07.232 INFO:teuthology.orchestra.run.vm00.stdout:(4/138): ceph-mds-20.2.0-712.g70f8415b.el9.x86_ 6.4 MB/s | 2.3 MB 00:00
2026-03-20T12:39:07.311 INFO:teuthology.orchestra.run.vm00.stdout:(5/138): ceph-base-20.2.0-712.g70f8415b.el9.x86 3.2 MB/s | 5.9 MB 00:01
2026-03-20T12:39:07.470 INFO:teuthology.orchestra.run.vm00.stdout:(6/138): ceph-mgr-20.2.0-712.g70f8415b.el9.x86_ 4.0 MB/s | 962 kB 00:00
2026-03-20T12:39:08.219 INFO:teuthology.orchestra.run.vm00.stdout:(7/138): ceph-mon-20.2.0-712.g70f8415b.el9.x86_ 5.6 MB/s | 5.0 MB 00:00
2026-03-20T12:39:08.383 INFO:teuthology.orchestra.run.vm09.stdout:lab-extras 63 kB/s | 50 kB 00:00
2026-03-20T12:39:08.694
INFO:teuthology.orchestra.run.vm00.stdout:(8/138): ceph-common-20.2.0-712.g70f8415b.el9.x 7.3 MB/s | 24 MB 00:03 2026-03-20T12:39:08.810 INFO:teuthology.orchestra.run.vm00.stdout:(9/138): ceph-selinux-20.2.0-712.g70f8415b.el9. 218 kB/s | 25 kB 00:00 2026-03-20T12:39:09.085 INFO:teuthology.orchestra.run.vm00.stdout:(10/138): ceph-osd-20.2.0-712.g70f8415b.el9.x86 11 MB/s | 17 MB 00:01 2026-03-20T12:39:09.203 INFO:teuthology.orchestra.run.vm00.stdout:(11/138): libcephfs-devel-20.2.0-712.g70f8415b. 292 kB/s | 34 kB 00:00 2026-03-20T12:39:09.321 INFO:teuthology.orchestra.run.vm00.stdout:(12/138): libcephfs-proxy2-20.2.0-712.g70f8415b 205 kB/s | 24 kB 00:00 2026-03-20T12:39:09.567 INFO:teuthology.orchestra.run.vm00.stdout:(13/138): libcephfs2-20.2.0-712.g70f8415b.el9.x 3.4 MB/s | 866 kB 00:00 2026-03-20T12:39:09.686 INFO:teuthology.orchestra.run.vm00.stdout:(14/138): libcephsqlite-20.2.0-712.g70f8415b.el 1.3 MB/s | 164 kB 00:00 2026-03-20T12:39:09.806 INFO:teuthology.orchestra.run.vm00.stdout:(15/138): librados-devel-20.2.0-712.g70f8415b.e 1.0 MB/s | 126 kB 00:00 2026-03-20T12:39:09.820 INFO:teuthology.orchestra.run.vm09.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-20T12:39:09.820 INFO:teuthology.orchestra.run.vm09.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-20T12:39:09.855 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout:Installing:
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 126 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k
2026-03-20T12:39:09.860 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout:Upgrading:
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies:
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-20T12:39:09.861 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-20T12:39:09.862 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout:Installing weak dependencies:
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout:======================================================================================
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout:Install 136 Packages
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout:Upgrade 2 Packages
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:39:09.863 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 267 M
2026-03-20T12:39:09.864 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages:
2026-03-20T12:39:09.927 INFO:teuthology.orchestra.run.vm00.stdout:(16/138): libradosstriper1-20.2.0-712.g70f8415b 2.0 MB/s | 250 kB 00:00
2026-03-20T12:39:10.746 INFO:teuthology.orchestra.run.vm06.stdout:lab-extras 54 kB/s | 50 kB 00:00
2026-03-20T12:39:10.768 INFO:teuthology.orchestra.run.vm00.stdout:(17/138): librgw2-20.2.0-712.g70f8415b.el9.x86_ 7.6 MB/s | 6.4 MB 00:00
2026-03-20T12:39:10.889 INFO:teuthology.orchestra.run.vm00.stdout:(18/138): python3-ceph-argparse-20.2.0-712.g70f 373 kB/s | 45 kB 00:00
2026-03-20T12:39:11.010 INFO:teuthology.orchestra.run.vm00.stdout:(19/138): python3-ceph-common-20.2.0-712.g70f84 1.4 MB/s | 175 kB 00:00
2026-03-20T12:39:11.131 INFO:teuthology.orchestra.run.vm00.stdout:(20/138): python3-cephfs-20.2.0-712.g70f8415b.e 1.3 MB/s | 163 kB 00:00
2026-03-20T12:39:11.139 INFO:teuthology.orchestra.run.vm09.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-20T12:39:11.252 INFO:teuthology.orchestra.run.vm00.stdout:(21/138): python3-rados-20.2.0-712.g70f8415b.el 2.6 MB/s | 324 kB 00:00
2026-03-20T12:39:11.374 INFO:teuthology.orchestra.run.vm00.stdout:(22/138): python3-rbd-20.2.0-712.g70f8415b.el9. 2.5 MB/s | 304 kB 00:00
2026-03-20T12:39:11.493 INFO:teuthology.orchestra.run.vm00.stdout:(23/138): python3-rgw-20.2.0-712.g70f8415b.el9. 832 kB/s | 99 kB 00:00
2026-03-20T12:39:11.612 INFO:teuthology.orchestra.run.vm00.stdout:(24/138): rbd-fuse-20.2.0-712.g70f8415b.el9.x86 767 kB/s | 91 kB 00:00
2026-03-20T12:39:11.686 INFO:teuthology.orchestra.run.vm00.stdout:(25/138): ceph-radosgw-20.2.0-712.g70f8415b.el9 6.8 MB/s | 24 MB 00:03
2026-03-20T12:39:11.798 INFO:teuthology.orchestra.run.vm00.stdout:(26/138): rbd-nbd-20.2.0-712.g70f8415b.el9.x86_ 1.6 MB/s | 180 kB 00:00
2026-03-20T12:39:11.906 INFO:teuthology.orchestra.run.vm00.stdout:(27/138): ceph-grafana-dashboards-20.2.0-712.g7 398 kB/s | 43 kB 00:00
2026-03-20T12:39:11.935 INFO:teuthology.orchestra.run.vm09.stdout:(2/138): ceph-fuse-20.2.0-712.g70f8415b.el9.x86 1.2 MB/s | 939 kB 00:00
2026-03-20T12:39:12.017 INFO:teuthology.orchestra.run.vm00.stdout:(28/138): ceph-mgr-cephadm-20.2.0-712.g70f8415b 1.5 MB/s | 173 kB 00:00
2026-03-20T12:39:12.051 INFO:teuthology.orchestra.run.vm09.stdout:(3/138): ceph-immutable-object-cache-20.2.0-712 1.3 MB/s | 154 kB 00:00
2026-03-20T12:39:12.087 INFO:teuthology.orchestra.run.vm00.stdout:(29/138): rbd-mirror-20.2.0-712.g70f8415b.el9.x 6.1 MB/s | 2.9 MB 00:00
2026-03-20T12:39:12.124 INFO:teuthology.orchestra.run.vm06.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-20T12:39:12.125 INFO:teuthology.orchestra.run.vm06.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-20T12:39:12.159 INFO:teuthology.orchestra.run.vm06.stdout:Dependencies resolved.
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout:======================================================================================
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: Package Arch Version Repository Size
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout:======================================================================================
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout:Installing:
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 126 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout:Upgrading:
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout:Installing dependencies:
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-20T12:39:12.164 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco noarch 8.2.1-3.el9
epel 11 k 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-20T12:39:12.165 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf noarch 
3.14.0-17.el9 appstream 267 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile noarch 
2.0-10.el9 epel 20 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout:Installing weak dependencies: 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout:Transaction Summary 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout:====================================================================================== 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout:Install 136 Packages 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout:Upgrade 2 Packages 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout:Total download size: 267 M 2026-03-20T12:39:12.166 INFO:teuthology.orchestra.run.vm06.stdout:Downloading Packages: 2026-03-20T12:39:12.351 INFO:teuthology.orchestra.run.vm09.stdout:(4/138): ceph-base-20.2.0-712.g70f8415b.el9.x86 3.5 MB/s | 5.9 MB 00:01 2026-03-20T12:39:12.424 
INFO:teuthology.orchestra.run.vm09.stdout:(5/138): ceph-mds-20.2.0-712.g70f8415b.el9.x86_ 6.3 MB/s | 2.3 MB 00:00 2026-03-20T12:39:12.474 INFO:teuthology.orchestra.run.vm09.stdout:(6/138): ceph-mgr-20.2.0-712.g70f8415b.el9.x86_ 7.7 MB/s | 962 kB 00:00 2026-03-20T12:39:13.080 INFO:teuthology.orchestra.run.vm00.stdout:(30/138): ceph-mgr-diskprediction-local-20.2.0- 7.5 MB/s | 7.4 MB 00:00 2026-03-20T12:39:13.161 INFO:teuthology.orchestra.run.vm09.stdout:(7/138): ceph-mon-20.2.0-712.g70f8415b.el9.x86_ 6.9 MB/s | 5.0 MB 00:00 2026-03-20T12:39:13.201 INFO:teuthology.orchestra.run.vm00.stdout:(31/138): ceph-mgr-modules-core-20.2.0-712.g70f 2.3 MB/s | 290 kB 00:00 2026-03-20T12:39:13.320 INFO:teuthology.orchestra.run.vm00.stdout:(32/138): ceph-mgr-rook-20.2.0-712.g70f8415b.el 424 kB/s | 50 kB 00:00 2026-03-20T12:39:13.439 INFO:teuthology.orchestra.run.vm00.stdout:(33/138): ceph-prometheus-alerts-20.2.0-712.g70 146 kB/s | 17 kB 00:00 2026-03-20T12:39:13.572 INFO:teuthology.orchestra.run.vm00.stdout:(34/138): ceph-mgr-dashboard-20.2.0-712.g70f841 6.8 MB/s | 11 MB 00:01 2026-03-20T12:39:13.577 INFO:teuthology.orchestra.run.vm00.stdout:(35/138): ceph-volume-20.2.0-712.g70f8415b.el9. 
2.1 MB/s | 298 kB 00:00 2026-03-20T12:39:13.680 INFO:teuthology.orchestra.run.vm00.stdout:(36/138): bzip2-1.0.8-11.el9.x86_64.rpm 521 kB/s | 55 kB 00:00 2026-03-20T12:39:13.783 INFO:teuthology.orchestra.run.vm00.stdout:(37/138): cryptsetup-2.8.1-3.el9.x86_64.rpm 3.4 MB/s | 351 kB 00:00 2026-03-20T12:39:13.803 INFO:teuthology.orchestra.run.vm00.stdout:(38/138): cephadm-20.2.0-712.g70f8415b.el9.noar 4.3 MB/s | 1.0 MB 00:00 2026-03-20T12:39:13.828 INFO:teuthology.orchestra.run.vm00.stdout:(39/138): fuse-2.9.9-17.el9.x86_64.rpm 1.7 MB/s | 80 kB 00:00 2026-03-20T12:39:13.851 INFO:teuthology.orchestra.run.vm00.stdout:(40/138): ledmon-libs-1.1.0-3.el9.x86_64.rpm 837 kB/s | 40 kB 00:00 2026-03-20T12:39:14.058 INFO:teuthology.orchestra.run.vm00.stdout:(41/138): ceph-test-20.2.0-712.g70f8415b.el9.x8 16 MB/s | 84 MB 00:05 2026-03-20T12:39:14.058 INFO:teuthology.orchestra.run.vm09.stdout:(8/138): ceph-osd-20.2.0-712.g70f8415b.el9.x86_ 11 MB/s | 17 MB 00:01 2026-03-20T12:39:14.060 INFO:teuthology.orchestra.run.vm00.stdout:(42/138): libconfig-1.7.2-9.el9.x86_64.rpm 311 kB/s | 72 kB 00:00 2026-03-20T12:39:14.062 INFO:teuthology.orchestra.run.vm00.stdout:(43/138): libgfortran-11.5.0-14.el9.x86_64.rpm 3.7 MB/s | 794 kB 00:00 2026-03-20T12:39:14.121 INFO:teuthology.orchestra.run.vm00.stdout:(44/138): libquadmath-11.5.0-14.el9.x86_64.rpm 2.9 MB/s | 184 kB 00:00 2026-03-20T12:39:14.122 INFO:teuthology.orchestra.run.vm00.stdout:(45/138): pciutils-3.7.0-7.el9.x86_64.rpm 1.5 MB/s | 93 kB 00:00 2026-03-20T12:39:14.122 INFO:teuthology.orchestra.run.vm09.stdout:(9/138): ceph-common-20.2.0-712.g70f8415b.el9.x 6.9 MB/s | 24 MB 00:03 2026-03-20T12:39:14.123 INFO:teuthology.orchestra.run.vm00.stdout:(46/138): mailcap-2.1.49-5.el9.noarch.rpm 546 kB/s | 33 kB 00:00 2026-03-20T12:39:14.168 INFO:teuthology.orchestra.run.vm00.stdout:(47/138): python3-cffi-1.14.5-5.el9.x86_64.rpm 5.3 MB/s | 253 kB 00:00 2026-03-20T12:39:14.169 INFO:teuthology.orchestra.run.vm09.stdout:(10/138): 
ceph-selinux-20.2.0-712.g70f8415b.el9 227 kB/s | 25 kB 00:00 2026-03-20T12:39:14.170 INFO:teuthology.orchestra.run.vm00.stdout:(48/138): python3-ply-3.11-14.el9.noarch.rpm 2.2 MB/s | 106 kB 00:00 2026-03-20T12:39:14.200 INFO:teuthology.orchestra.run.vm00.stdout:(49/138): python3-cryptography-36.0.1-5.el9.x86 16 MB/s | 1.2 MB 00:00 2026-03-20T12:39:14.213 INFO:teuthology.orchestra.run.vm00.stdout:(50/138): python3-pycparser-2.20-6.el9.noarch.r 3.0 MB/s | 135 kB 00:00 2026-03-20T12:39:14.220 INFO:teuthology.orchestra.run.vm00.stdout:(51/138): python3-pyparsing-2.4.7-9.el9.noarch. 2.9 MB/s | 150 kB 00:00 2026-03-20T12:39:14.284 INFO:teuthology.orchestra.run.vm00.stdout:(52/138): python3-requests-2.25.1-10.el9.noarch 1.5 MB/s | 126 kB 00:00 2026-03-20T12:39:14.284 INFO:teuthology.orchestra.run.vm09.stdout:(11/138): libcephfs-devel-20.2.0-712.g70f8415b. 299 kB/s | 34 kB 00:00 2026-03-20T12:39:14.285 INFO:teuthology.orchestra.run.vm00.stdout:(53/138): python3-urllib3-1.26.5-7.el9.noarch.r 2.9 MB/s | 218 kB 00:00 2026-03-20T12:39:14.313 INFO:teuthology.orchestra.run.vm06.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-20T12:39:14.384 INFO:teuthology.orchestra.run.vm00.stdout:(54/138): boost-program-options-1.75.0-13.el9.x 1.0 MB/s | 104 kB 00:00 2026-03-20T12:39:14.388 INFO:teuthology.orchestra.run.vm00.stdout:(55/138): unzip-6.0-59.el9.x86_64.rpm 1.1 MB/s | 182 kB 00:00 2026-03-20T12:39:14.388 INFO:teuthology.orchestra.run.vm00.stdout:(56/138): zip-3.0-35.el9.x86_64.rpm 2.5 MB/s | 266 kB 00:00 2026-03-20T12:39:14.395 INFO:teuthology.orchestra.run.vm09.stdout:(12/138): libcephfs-proxy2-20.2.0-712.g70f8415b 218 kB/s | 24 kB 00:00 2026-03-20T12:39:14.401 INFO:teuthology.orchestra.run.vm00.stdout:(57/138): flexiblas-openblas-openmp-3.0.4-9.el9 1.2 MB/s | 15 kB 00:00 2026-03-20T12:39:14.404 INFO:teuthology.orchestra.run.vm00.stdout:(58/138): flexiblas-3.0.4-9.el9.x86_64.rpm 1.5 MB/s | 30 kB 00:00 2026-03-20T12:39:14.487 
INFO:teuthology.orchestra.run.vm00.stdout:(59/138): libnbd-1.20.3-4.el9.x86_64.rpm 1.9 MB/s | 164 kB 00:00 2026-03-20T12:39:14.507 INFO:teuthology.orchestra.run.vm00.stdout:(60/138): librabbitmq-0.11.0-7.el9.x86_64.rpm 2.2 MB/s | 45 kB 00:00 2026-03-20T12:39:14.551 INFO:teuthology.orchestra.run.vm00.stdout:(61/138): librdkafka-1.6.1-102.el9.x86_64.rpm 15 MB/s | 662 kB 00:00 2026-03-20T12:39:14.574 INFO:teuthology.orchestra.run.vm00.stdout:(62/138): libstoragemgmt-1.10.1-1.el9.x86_64.rp 10 MB/s | 246 kB 00:00 2026-03-20T12:39:14.601 INFO:teuthology.orchestra.run.vm00.stdout:(63/138): flexiblas-netlib-3.0.4-9.el9.x86_64.r 14 MB/s | 3.0 MB 00:00 2026-03-20T12:39:14.603 INFO:teuthology.orchestra.run.vm00.stdout:(64/138): libxslt-1.1.34-12.el9.x86_64.rpm 8.1 MB/s | 233 kB 00:00 2026-03-20T12:39:14.639 INFO:teuthology.orchestra.run.vm00.stdout:(65/138): lttng-ust-2.12.0-6.el9.x86_64.rpm 7.6 MB/s | 292 kB 00:00 2026-03-20T12:39:14.664 INFO:teuthology.orchestra.run.vm00.stdout:(66/138): lua-5.4.4-4.el9.x86_64.rpm 3.0 MB/s | 188 kB 00:00 2026-03-20T12:39:14.673 INFO:teuthology.orchestra.run.vm00.stdout:(67/138): openblas-0.3.29-1.el9.x86_64.rpm 1.2 MB/s | 42 kB 00:00 2026-03-20T12:39:14.708 INFO:teuthology.orchestra.run.vm00.stdout:(68/138): libpmemobj-1.12.1-1.el9.x86_64.rpm 527 kB/s | 160 kB 00:00 2026-03-20T12:39:14.743 INFO:teuthology.orchestra.run.vm00.stdout:(69/138): perl-Benchmark-1.23-483.el9.noarch.rp 375 kB/s | 26 kB 00:00 2026-03-20T12:39:14.755 INFO:teuthology.orchestra.run.vm00.stdout:(70/138): perl-Test-Harness-3.42-461.el9.noarch 6.2 MB/s | 295 kB 00:00 2026-03-20T12:39:14.772 INFO:teuthology.orchestra.run.vm00.stdout:(71/138): openblas-openmp-0.3.29-1.el9.x86_64.r 49 MB/s | 5.3 MB 00:00 2026-03-20T12:39:14.846 INFO:teuthology.orchestra.run.vm00.stdout:(72/138): protobuf-3.14.0-17.el9.x86_64.rpm 9.8 MB/s | 1.0 MB 00:00 2026-03-20T12:39:14.894 INFO:teuthology.orchestra.run.vm00.stdout:(73/138): python3-devel-3.9.25-3.el9.x86_64.rpm 2.0 MB/s | 244 kB 00:00 
2026-03-20T12:39:14.910 INFO:teuthology.orchestra.run.vm00.stdout:(74/138): python3-babel-2.9.1-2.el9.noarch.rpm 38 MB/s | 6.0 MB 00:00 2026-03-20T12:39:14.924 INFO:teuthology.orchestra.run.vm00.stdout:(75/138): python3-jinja2-2.11.3-8.el9.noarch.rp 3.1 MB/s | 249 kB 00:00 2026-03-20T12:39:14.933 INFO:teuthology.orchestra.run.vm00.stdout:(76/138): python3-jmespath-1.0.1-1.el9.noarch.r 1.2 MB/s | 48 kB 00:00 2026-03-20T12:39:14.946 INFO:teuthology.orchestra.run.vm00.stdout:(77/138): python3-libstoragemgmt-1.10.1-1.el9.x 4.8 MB/s | 177 kB 00:00 2026-03-20T12:39:14.948 INFO:teuthology.orchestra.run.vm00.stdout:(78/138): python3-markupsafe-1.1.1-12.el9.x86_6 1.4 MB/s | 35 kB 00:00 2026-03-20T12:39:15.016 INFO:teuthology.orchestra.run.vm00.stdout:(79/138): python3-numpy-f2py-1.23.5-2.el9.x86_6 6.2 MB/s | 442 kB 00:00 2026-03-20T12:39:15.040 INFO:teuthology.orchestra.run.vm00.stdout:(80/138): python3-numpy-1.23.5-2.el9.x86_64.rpm 58 MB/s | 6.1 MB 00:00 2026-03-20T12:39:15.040 INFO:teuthology.orchestra.run.vm00.stdout:(81/138): python3-packaging-20.9-5.el9.noarch.r 838 kB/s | 77 kB 00:00 2026-03-20T12:39:15.060 INFO:teuthology.orchestra.run.vm00.stdout:(82/138): python3-protobuf-3.14.0-17.el9.noarch 6.0 MB/s | 267 kB 00:00 2026-03-20T12:39:15.073 INFO:teuthology.orchestra.run.vm00.stdout:(83/138): python3-pyasn1-0.4.8-7.el9.noarch.rpm 4.6 MB/s | 157 kB 00:00 2026-03-20T12:39:15.079 INFO:teuthology.orchestra.run.vm00.stdout:(84/138): python3-requests-oauthlib-1.3.0-12.el 2.8 MB/s | 54 kB 00:00 2026-03-20T12:39:15.086 INFO:teuthology.orchestra.run.vm00.stdout:(85/138): python3-toml-0.10.2-6.el9.noarch.rpm 5.3 MB/s | 42 kB 00:00 2026-03-20T12:39:15.117 INFO:teuthology.orchestra.run.vm00.stdout:(86/138): qatlib-25.08.0-2.el9.x86_64.rpm 7.7 MB/s | 240 kB 00:00 2026-03-20T12:39:15.126 INFO:teuthology.orchestra.run.vm00.stdout:(87/138): qatlib-service-25.08.0-2.el9.x86_64.r 3.9 MB/s | 37 kB 00:00 2026-03-20T12:39:15.127 INFO:teuthology.orchestra.run.vm00.stdout:(88/138): 
python3-pyasn1-modules-0.4.8-7.el9.no 3.1 MB/s | 277 kB 00:00 2026-03-20T12:39:15.139 INFO:teuthology.orchestra.run.vm09.stdout:(13/138): libcephfs2-20.2.0-712.g70f8415b.el9.x 1.1 MB/s | 866 kB 00:00 2026-03-20T12:39:15.143 INFO:teuthology.orchestra.run.vm00.stdout:(89/138): qatzip-libs-1.3.1-1.el9.x86_64.rpm 3.8 MB/s | 66 kB 00:00 2026-03-20T12:39:15.161 INFO:teuthology.orchestra.run.vm00.stdout:(90/138): xmlstarlet-1.6.1-20.el9.x86_64.rpm 3.5 MB/s | 64 kB 00:00 2026-03-20T12:39:15.226 INFO:teuthology.orchestra.run.vm00.stdout:(91/138): socat-1.7.4.1-8.el9.x86_64.rpm 3.0 MB/s | 303 kB 00:00 2026-03-20T12:39:15.279 INFO:teuthology.orchestra.run.vm09.stdout:(14/138): libcephsqlite-20.2.0-712.g70f8415b.el 1.1 MB/s | 164 kB 00:00 2026-03-20T12:39:15.279 INFO:teuthology.orchestra.run.vm06.stdout:(2/138): ceph-fuse-20.2.0-712.g70f8415b.el9.x86 972 kB/s | 939 kB 00:00 2026-03-20T12:39:15.394 INFO:teuthology.orchestra.run.vm09.stdout:(15/138): librados-devel-20.2.0-712.g70f8415b.e 1.1 MB/s | 126 kB 00:00 2026-03-20T12:39:15.443 INFO:teuthology.orchestra.run.vm00.stdout:(92/138): lua-devel-5.4.4-4.el9.x86_64.rpm 79 kB/s | 22 kB 00:00 2026-03-20T12:39:15.516 INFO:teuthology.orchestra.run.vm00.stdout:(93/138): python3-scipy-1.9.3-2.el9.x86_64.rpm 44 MB/s | 19 MB 00:00 2026-03-20T12:39:15.517 INFO:teuthology.orchestra.run.vm09.stdout:(16/138): libradosstriper1-20.2.0-712.g70f8415b 2.0 MB/s | 250 kB 00:00 2026-03-20T12:39:15.628 INFO:teuthology.orchestra.run.vm00.stdout:(94/138): protobuf-compiler-3.14.0-17.el9.x86_6 2.1 MB/s | 862 kB 00:00 2026-03-20T12:39:15.722 INFO:teuthology.orchestra.run.vm00.stdout:(95/138): grpc-data-1.46.7-10.el9.noarch.rpm 209 kB/s | 19 kB 00:00 2026-03-20T12:39:15.733 INFO:teuthology.orchestra.run.vm00.stdout:(96/138): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.9 MB/s | 551 kB 00:00 2026-03-20T12:39:15.735 INFO:teuthology.orchestra.run.vm00.stdout:(97/138): gperftools-libs-2.9.1-3.el9.x86_64.rp 1.4 MB/s | 308 kB 00:00 2026-03-20T12:39:15.771 
INFO:teuthology.orchestra.run.vm00.stdout:(98/138): libarrow-doc-9.0.0-15.el9.noarch.rpm 651 kB/s | 25 kB 00:00 2026-03-20T12:39:15.775 INFO:teuthology.orchestra.run.vm00.stdout:(99/138): liboath-2.6.12-1.el9.x86_64.rpm 1.2 MB/s | 49 kB 00:00 2026-03-20T12:39:15.818 INFO:teuthology.orchestra.run.vm00.stdout:(100/138): libunwind-1.6.2-1.el9.x86_64.rpm 1.4 MB/s | 67 kB 00:00 2026-03-20T12:39:15.825 INFO:teuthology.orchestra.run.vm00.stdout:(101/138): luarocks-3.9.2-5.el9.noarch.rpm 3.0 MB/s | 151 kB 00:00 2026-03-20T12:39:15.915 INFO:teuthology.orchestra.run.vm00.stdout:(102/138): parquet-libs-9.0.0-15.el9.x86_64.rpm 8.5 MB/s | 838 kB 00:00 2026-03-20T12:39:15.971 INFO:teuthology.orchestra.run.vm00.stdout:(103/138): python3-autocommand-2.2.2-8.el9.noar 533 kB/s | 29 kB 00:00 2026-03-20T12:39:16.014 INFO:teuthology.orchestra.run.vm00.stdout:(104/138): libarrow-9.0.0-15.el9.x86_64.rpm 15 MB/s | 4.4 MB 00:00 2026-03-20T12:39:16.019 INFO:teuthology.orchestra.run.vm00.stdout:(105/138): python3-backports-tarfile-1.2.0-1.el 1.2 MB/s | 60 kB 00:00 2026-03-20T12:39:16.037 INFO:teuthology.orchestra.run.vm00.stdout:(106/138): python3-asyncssh-2.13.2-5.el9.noarch 2.5 MB/s | 548 kB 00:00 2026-03-20T12:39:16.059 INFO:teuthology.orchestra.run.vm00.stdout:(107/138): python3-cachetools-4.2.4-1.el9.noarc 814 kB/s | 32 kB 00:00 2026-03-20T12:39:16.067 INFO:teuthology.orchestra.run.vm00.stdout:(108/138): python3-bcrypt-3.2.2-1.el9.x86_64.rp 817 kB/s | 43 kB 00:00 2026-03-20T12:39:16.075 INFO:teuthology.orchestra.run.vm00.stdout:(109/138): python3-certifi-2023.05.07-4.el9.noa 383 kB/s | 14 kB 00:00 2026-03-20T12:39:16.119 INFO:teuthology.orchestra.run.vm00.stdout:(110/138): python3-cheroot-10.0.1-4.el9.noarch. 
2.8 MB/s | 173 kB 00:00 2026-03-20T12:39:16.137 INFO:teuthology.orchestra.run.vm00.stdout:(111/138): python3-google-auth-2.45.0-1.el9.noa 4.0 MB/s | 254 kB 00:00 2026-03-20T12:39:16.152 INFO:teuthology.orchestra.run.vm00.stdout:(112/138): python3-cherrypy-18.6.1-2.el9.noarch 4.1 MB/s | 358 kB 00:00 2026-03-20T12:39:16.181 INFO:teuthology.orchestra.run.vm00.stdout:(113/138): python3-grpcio-tools-1.46.7-10.el9.x 3.2 MB/s | 144 kB 00:00 2026-03-20T12:39:16.216 INFO:teuthology.orchestra.run.vm00.stdout:(114/138): python3-jaraco-classes-3.2.1-5.el9.n 507 kB/s | 18 kB 00:00 2026-03-20T12:39:16.265 INFO:teuthology.orchestra.run.vm06.stdout:(3/138): ceph-immutable-object-cache-20.2.0-712 156 kB/s | 154 kB 00:00 2026-03-20T12:39:16.300 INFO:teuthology.orchestra.run.vm00.stdout:(115/138): python3-jaraco-collections-3.0.0-8.e 277 kB/s | 23 kB 00:00 2026-03-20T12:39:16.309 INFO:teuthology.orchestra.run.vm00.stdout:(116/138): python3-jaraco-8.2.1-3.el9.noarch.rp 68 kB/s | 11 kB 00:00 2026-03-20T12:39:16.316 INFO:teuthology.orchestra.run.vm09.stdout:(17/138): librgw2-20.2.0-712.g70f8415b.el9.x86_ 8.0 MB/s | 6.4 MB 00:00 2026-03-20T12:39:16.349 INFO:teuthology.orchestra.run.vm00.stdout:(117/138): python3-jaraco-context-6.0.1-3.el9.n 403 kB/s | 20 kB 00:00 2026-03-20T12:39:16.380 INFO:teuthology.orchestra.run.vm00.stdout:(118/138): python3-jaraco-functools-3.5.0-2.el9 274 kB/s | 19 kB 00:00 2026-03-20T12:39:16.386 INFO:teuthology.orchestra.run.vm00.stdout:(119/138): python3-jaraco-text-4.0.0-2.el9.noar 719 kB/s | 26 kB 00:00 2026-03-20T12:39:16.426 INFO:teuthology.orchestra.run.vm09.stdout:(18/138): python3-ceph-argparse-20.2.0-712.g70f 409 kB/s | 45 kB 00:00 2026-03-20T12:39:16.436 INFO:teuthology.orchestra.run.vm00.stdout:(120/138): python3-more-itertools-8.12.0-2.el9. 
1.6 MB/s | 79 kB 00:00 2026-03-20T12:39:16.460 INFO:teuthology.orchestra.run.vm00.stdout:(121/138): python3-kubernetes-26.1.0-3.el9.noar 13 MB/s | 1.0 MB 00:00 2026-03-20T12:39:16.511 INFO:teuthology.orchestra.run.vm00.stdout:(122/138): python3-portend-3.1.0-2.el9.noarch.r 329 kB/s | 16 kB 00:00 2026-03-20T12:39:16.530 INFO:teuthology.orchestra.run.vm00.stdout:(123/138): python3-natsort-7.1.1-5.el9.noarch.r 616 kB/s | 58 kB 00:00 2026-03-20T12:39:16.539 INFO:teuthology.orchestra.run.vm09.stdout:(19/138): python3-ceph-common-20.2.0-712.g70f84 1.5 MB/s | 175 kB 00:00 2026-03-20T12:39:16.543 INFO:teuthology.orchestra.run.vm00.stdout:(124/138): python3-grpcio-1.46.7-10.el9.x86_64. 4.8 MB/s | 2.0 MB 00:00 2026-03-20T12:39:16.545 INFO:teuthology.orchestra.run.vm00.stdout:(125/138): python3-pyOpenSSL-21.0.0-1.el9.noarc 2.6 MB/s | 90 kB 00:00 2026-03-20T12:39:16.565 INFO:teuthology.orchestra.run.vm00.stdout:(126/138): python3-repoze-lru-0.7-16.el9.noarch 868 kB/s | 31 kB 00:00 2026-03-20T12:39:16.581 INFO:teuthology.orchestra.run.vm00.stdout:(127/138): python3-rsa-4.9-2.el9.noarch.rpm 1.6 MB/s | 59 kB 00:00 2026-03-20T12:39:16.601 INFO:teuthology.orchestra.run.vm00.stdout:(128/138): python3-tempora-5.0.0-2.el9.noarch.r 1.0 MB/s | 36 kB 00:00 2026-03-20T12:39:16.603 INFO:teuthology.orchestra.run.vm00.stdout:(129/138): python3-routes-2.5.1-5.el9.noarch.rp 3.1 MB/s | 188 kB 00:00 2026-03-20T12:39:16.639 INFO:teuthology.orchestra.run.vm00.stdout:(130/138): python3-xmltodict-0.12.0-15.el9.noar 621 kB/s | 22 kB 00:00 2026-03-20T12:39:16.643 INFO:teuthology.orchestra.run.vm00.stdout:(131/138): python3-websocket-client-1.2.3-2.el9 2.1 MB/s | 90 kB 00:00 2026-03-20T12:39:16.650 INFO:teuthology.orchestra.run.vm00.stdout:(132/138): python3-typing-extensions-4.15.0-1.e 1.2 MB/s | 86 kB 00:00 2026-03-20T12:39:16.652 INFO:teuthology.orchestra.run.vm09.stdout:(20/138): python3-cephfs-20.2.0-712.g70f8415b.e 1.4 MB/s | 163 kB 00:00 2026-03-20T12:39:16.674 
INFO:teuthology.orchestra.run.vm00.stdout:(133/138): re2-20211101-20.el9.x86_64.rpm 6.0 MB/s | 191 kB 00:00 2026-03-20T12:39:16.679 INFO:teuthology.orchestra.run.vm00.stdout:(134/138): python3-zc-lockfile-2.0-10.el9.noarc 506 kB/s | 20 kB 00:00 2026-03-20T12:39:16.728 INFO:teuthology.orchestra.run.vm09.stdout:(21/138): ceph-radosgw-20.2.0-712.g70f8415b.el9 6.6 MB/s | 24 MB 00:03 2026-03-20T12:39:16.746 INFO:teuthology.orchestra.run.vm00.stdout:(135/138): s3cmd-2.4.0-1.el9.noarch.rpm 2.1 MB/s | 206 kB 00:00 2026-03-20T12:39:16.777 INFO:teuthology.orchestra.run.vm09.stdout:(22/138): python3-rados-20.2.0-712.g70f8415b.el 2.5 MB/s | 324 kB 00:00 2026-03-20T12:39:16.846 INFO:teuthology.orchestra.run.vm09.stdout:(23/138): python3-rbd-20.2.0-712.g70f8415b.el9. 2.5 MB/s | 304 kB 00:00 2026-03-20T12:39:16.856 INFO:teuthology.orchestra.run.vm00.stdout:(136/138): thrift-0.15.0-4.el9.x86_64.rpm 8.7 MB/s | 1.6 MB 00:00 2026-03-20T12:39:16.889 INFO:teuthology.orchestra.run.vm09.stdout:(24/138): python3-rgw-20.2.0-712.g70f8415b.el9. 
884 kB/s | 99 kB 00:00 2026-03-20T12:39:16.962 INFO:teuthology.orchestra.run.vm09.stdout:(25/138): rbd-fuse-20.2.0-712.g70f8415b.el9.x86 788 kB/s | 91 kB 00:00 2026-03-20T12:39:17.080 INFO:teuthology.orchestra.run.vm09.stdout:(26/138): rbd-nbd-20.2.0-712.g70f8415b.el9.x86_ 1.5 MB/s | 180 kB 00:00 2026-03-20T12:39:17.087 INFO:teuthology.orchestra.run.vm06.stdout:(4/138): ceph-mds-20.2.0-712.g70f8415b.el9.x86_ 2.8 MB/s | 2.3 MB 00:00 2026-03-20T12:39:17.194 INFO:teuthology.orchestra.run.vm09.stdout:(27/138): ceph-grafana-dashboards-20.2.0-712.g7 379 kB/s | 43 kB 00:00 2026-03-20T12:39:17.310 INFO:teuthology.orchestra.run.vm09.stdout:(28/138): ceph-mgr-cephadm-20.2.0-712.g70f8415b 1.5 MB/s | 173 kB 00:00 2026-03-20T12:39:17.338 INFO:teuthology.orchestra.run.vm09.stdout:(29/138): rbd-mirror-20.2.0-712.g70f8415b.el9.x 6.5 MB/s | 2.9 MB 00:00 2026-03-20T12:39:17.440 INFO:teuthology.orchestra.run.vm06.stdout:(5/138): ceph-mgr-20.2.0-712.g70f8415b.el9.x86_ 2.7 MB/s | 962 kB 00:00 2026-03-20T12:39:17.734 INFO:teuthology.orchestra.run.vm00.stdout:(137/138): librbd1-20.2.0-712.g70f8415b.el9.x86 2.9 MB/s | 2.8 MB 00:00 2026-03-20T12:39:17.746 INFO:teuthology.orchestra.run.vm00.stdout:(138/138): librados2-20.2.0-712.g70f8415b.el9.x 3.3 MB/s | 3.5 MB 00:01 2026-03-20T12:39:17.750 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-20T12:39:17.750 INFO:teuthology.orchestra.run.vm00.stdout:Total 20 MB/s | 267 MB 00:13 2026-03-20T12:39:18.351 INFO:teuthology.orchestra.run.vm09.stdout:(30/138): ceph-mgr-diskprediction-local-20.2.0- 7.3 MB/s | 7.4 MB 00:01 2026-03-20T12:39:18.408 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-20T12:39:18.464 INFO:teuthology.orchestra.run.vm09.stdout:(31/138): ceph-mgr-modules-core-20.2.0-712.g70f 2.5 MB/s | 290 kB 00:00 2026-03-20T12:39:18.475 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 
2026-03-20T12:39:18.475 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-20T12:39:18.574 INFO:teuthology.orchestra.run.vm09.stdout:(32/138): ceph-mgr-rook-20.2.0-712.g70f8415b.el 455 kB/s | 50 kB 00:00
2026-03-20T12:39:18.685 INFO:teuthology.orchestra.run.vm09.stdout:(33/138): ceph-prometheus-alerts-20.2.0-712.g70 157 kB/s | 17 kB 00:00
2026-03-20T12:39:18.798 INFO:teuthology.orchestra.run.vm09.stdout:(34/138): ceph-volume-20.2.0-712.g70f8415b.el9. 2.6 MB/s | 298 kB 00:00
2026-03-20T12:39:19.020 INFO:teuthology.orchestra.run.vm09.stdout:(35/138): cephadm-20.2.0-712.g70f8415b.el9.noar 4.5 MB/s | 1.0 MB 00:00
2026-03-20T12:39:19.075 INFO:teuthology.orchestra.run.vm06.stdout:(6/138): ceph-mon-20.2.0-712.g70f8415b.el9.x86_ 3.1 MB/s | 5.0 MB 00:01
2026-03-20T12:39:19.501 INFO:teuthology.orchestra.run.vm09.stdout:(36/138): bzip2-1.0.8-11.el9.x86_64.rpm 114 kB/s | 55 kB 00:00
2026-03-20T12:39:19.576 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-20T12:39:19.577 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T12:39:20.525 INFO:teuthology.orchestra.run.vm09.stdout:(37/138): cryptsetup-2.8.1-3.el9.x86_64.rpm 343 kB/s | 351 kB 00:01
2026-03-20T12:39:20.753 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T12:39:20.759 INFO:teuthology.orchestra.run.vm09.stdout:(38/138): fuse-2.9.9-17.el9.x86_64.rpm 341 kB/s | 80 kB 00:00
2026-03-20T12:39:20.763 INFO:teuthology.orchestra.run.vm00.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140
2026-03-20T12:39:20.766 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140
2026-03-20T12:39:20.781 INFO:teuthology.orchestra.run.vm00.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140
2026-03-20T12:39:20.977 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140
2026-03-20T12:39:20.981 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140
2026-03-20T12:39:20.997 INFO:teuthology.orchestra.run.vm09.stdout:(39/138): ledmon-libs-1.1.0-3.el9.x86_64.rpm 170 kB/s | 40 kB 00:00
2026-03-20T12:39:21.021 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140
2026-03-20T12:39:21.034 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-20T12:39:21.038 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140
2026-03-20T12:39:21.043 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140
2026-03-20T12:39:21.046 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140
2026-03-20T12:39:21.053 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140
2026-03-20T12:39:21.227 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140
2026-03-20T12:39:21.229 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T12:39:21.256 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T12:39:21.257 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140
2026-03-20T12:39:21.290 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140
2026-03-20T12:39:21.292 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T12:39:21.293 INFO:teuthology.orchestra.run.vm09.stdout:(40/138): libconfig-1.7.2-9.el9.x86_64.rpm 243 kB/s | 72 kB 00:00
2026-03-20T12:39:21.313 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T12:39:21.356 INFO:teuthology.orchestra.run.vm00.stdout: Installing : re2-1:20211101-20.el9.x86_64 15/140
2026-03-20T12:39:21.388 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140
2026-03-20T12:39:21.401 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/140
2026-03-20T12:39:21.409 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140
2026-03-20T12:39:21.413 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140
2026-03-20T12:39:21.420 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140
2026-03-20T12:39:21.463 INFO:teuthology.orchestra.run.vm00.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140
2026-03-20T12:39:21.481 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140
2026-03-20T12:39:21.487 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140
2026-03-20T12:39:21.499 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140
2026-03-20T12:39:21.503 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140
2026-03-20T12:39:21.546 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140
2026-03-20T12:39:21.576 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140
2026-03-20T12:39:21.580 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140
2026-03-20T12:39:21.580 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140
2026-03-20T12:39:21.636 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140
2026-03-20T12:39:21.639 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140
2026-03-20T12:39:21.666 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140
2026-03-20T12:39:21.682 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140
2026-03-20T12:39:21.690 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140
2026-03-20T12:39:21.723 INFO:teuthology.orchestra.run.vm00.stdout: Installing : zip-3.0-35.el9.x86_64 33/140
2026-03-20T12:39:21.729 INFO:teuthology.orchestra.run.vm00.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140
2026-03-20T12:39:21.738 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140
2026-03-20T12:39:21.800 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140
2026-03-20T12:39:21.820 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140
2026-03-20T12:39:21.843 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/140
2026-03-20T12:39:21.849 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140
2026-03-20T12:39:21.860 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140
2026-03-20T12:39:21.868 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140
2026-03-20T12:39:21.873 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140
2026-03-20T12:39:21.900 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140
2026-03-20T12:39:21.908 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140
2026-03-20T12:39:21.915 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140
2026-03-20T12:39:21.931 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140
2026-03-20T12:39:21.946 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140
2026-03-20T12:39:21.953 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140
2026-03-20T12:39:21.968 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140
2026-03-20T12:39:22.029 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140
2026-03-20T12:39:22.458 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140
2026-03-20T12:39:22.474 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140
2026-03-20T12:39:22.479 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140
2026-03-20T12:39:22.487 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140
2026-03-20T12:39:22.492 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140
2026-03-20T12:39:22.501 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140
2026-03-20T12:39:22.505 INFO:teuthology.orchestra.run.vm00.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140
2026-03-20T12:39:22.507 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140
2026-03-20T12:39:22.540 INFO:teuthology.orchestra.run.vm00.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140
2026-03-20T12:39:22.595 INFO:teuthology.orchestra.run.vm00.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140
2026-03-20T12:39:22.607 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140
2026-03-20T12:39:22.616 INFO:teuthology.orchestra.run.vm00.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140
2026-03-20T12:39:22.621 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140
2026-03-20T12:39:22.630 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140
2026-03-20T12:39:22.635 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140
2026-03-20T12:39:22.645 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140
2026-03-20T12:39:22.653 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140
2026-03-20T12:39:22.691 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 68/140
2026-03-20T12:39:22.706 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140
2026-03-20T12:39:22.720 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140
2026-03-20T12:39:22.729 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140
2026-03-20T12:39:22.777 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140
2026-03-20T12:39:23.067 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140
2026-03-20T12:39:23.101 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140
2026-03-20T12:39:23.104 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-20T12:39:23.108 INFO:teuthology.orchestra.run.vm00.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140
2026-03-20T12:39:23.172 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140
2026-03-20T12:39:23.176 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/140
2026-03-20T12:39:23.200 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140
2026-03-20T12:39:23.532 INFO:teuthology.orchestra.run.vm09.stdout:(41/138): libgfortran-11.5.0-14.el9.x86_64.rpm 355 kB/s | 794 kB 00:02
2026-03-20T12:39:23.619 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140
2026-03-20T12:39:23.713 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140
2026-03-20T12:39:23.943 INFO:teuthology.orchestra.run.vm09.stdout:(42/138): libquadmath-11.5.0-14.el9.x86_64.rpm 450 kB/s | 184 kB 00:00
2026-03-20T12:39:24.027 INFO:teuthology.orchestra.run.vm09.stdout:(43/138): mailcap-2.1.49-5.el9.noarch.rpm 396 kB/s | 33 kB 00:00
2026-03-20T12:39:24.324 INFO:teuthology.orchestra.run.vm06.stdout:(7/138): ceph-base-20.2.0-712.g70f8415b.el9.x86 574 kB/s | 5.9 MB 00:10
2026-03-20T12:39:24.334 INFO:teuthology.orchestra.run.vm09.stdout:(44/138): pciutils-3.7.0-7.el9.x86_64.rpm 303 kB/s | 93 kB 00:00
2026-03-20T12:39:24.555 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140
2026-03-20T12:39:24.583 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140
2026-03-20T12:39:24.590 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140
2026-03-20T12:39:24.594 INFO:teuthology.orchestra.run.vm00.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140
2026-03-20T12:39:24.604 INFO:teuthology.orchestra.run.vm00.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140
2026-03-20T12:39:24.947 INFO:teuthology.orchestra.run.vm00.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140
2026-03-20T12:39:24.951 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T12:39:24.979 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T12:39:24.981 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140
2026-03-20T12:39:25.023 INFO:teuthology.orchestra.run.vm09.stdout:(45/138): python3-cffi-1.14.5-5.el9.x86_64.rpm 368 kB/s | 253 kB 00:00
2026-03-20T12:39:26.296 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:39:26.301 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:39:26.324 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:39:26.336 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 91/140
2026-03-20T12:39:26.346 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140
2026-03-20T12:39:26.363 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140
2026-03-20T12:39:26.424 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140
2026-03-20T12:39:26.531 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140
2026-03-20T12:39:26.546 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140
2026-03-20T12:39:26.577 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140
2026-03-20T12:39:26.617 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140
2026-03-20T12:39:26.687 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 99/140
2026-03-20T12:39:26.697 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140
2026-03-20T12:39:26.703 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140
2026-03-20T12:39:26.710 INFO:teuthology.orchestra.run.vm00.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140
2026-03-20T12:39:26.716 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140
2026-03-20T12:39:26.717 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T12:39:26.739 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T12:39:27.065 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140
2026-03-20T12:39:27.072 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T12:39:27.125 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T12:39:27.125 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-20T12:39:27.125 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-20T12:39:27.125 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:27.131 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T12:39:29.617 INFO:teuthology.orchestra.run.vm09.stdout:(46/138): python3-cryptography-36.0.1-5.el9.x86 278 kB/s | 1.2 MB 00:04
2026-03-20T12:39:30.238 INFO:teuthology.orchestra.run.vm09.stdout:(47/138): python3-ply-3.11-14.el9.noarch.rpm 171 kB/s | 106 kB 00:00
2026-03-20T12:39:30.893 INFO:teuthology.orchestra.run.vm09.stdout:(48/138): python3-pycparser-2.20-6.el9.noarch.r 206 kB/s | 135 kB 00:00
2026-03-20T12:39:31.081 INFO:teuthology.orchestra.run.vm09.stdout:(49/138): ceph-mgr-dashboard-20.2.0-712.g70f841 787 kB/s | 11 MB 00:13
2026-03-20T12:39:31.506 INFO:teuthology.orchestra.run.vm09.stdout:(50/138): python3-pyparsing-2.4.7-9.el9.noarch. 246 kB/s | 150 kB 00:00
2026-03-20T12:39:31.509 INFO:teuthology.orchestra.run.vm09.stdout:(51/138): python3-requests-2.25.1-10.el9.noarch 296 kB/s | 126 kB 00:00
2026-03-20T12:39:31.680 INFO:teuthology.orchestra.run.vm09.stdout:(52/138): unzip-6.0-59.el9.x86_64.rpm 1.0 MB/s | 182 kB 00:00
2026-03-20T12:39:31.790 INFO:teuthology.orchestra.run.vm09.stdout:(53/138): zip-3.0-35.el9.x86_64.rpm 2.4 MB/s | 266 kB 00:00
2026-03-20T12:39:32.085 INFO:teuthology.orchestra.run.vm09.stdout:(54/138): boost-program-options-1.75.0-13.el9.x 352 kB/s | 104 kB 00:00
2026-03-20T12:39:32.118 INFO:teuthology.orchestra.run.vm09.stdout:(55/138): flexiblas-3.0.4-9.el9.x86_64.rpm 925 kB/s | 30 kB 00:00
2026-03-20T12:39:32.163 INFO:teuthology.orchestra.run.vm09.stdout:(56/138): python3-urllib3-1.26.5-7.el9.noarch.r 331 kB/s | 218 kB 00:00
2026-03-20T12:39:32.261 INFO:teuthology.orchestra.run.vm09.stdout:(57/138): flexiblas-openblas-openmp-3.0.4-9.el9 151 kB/s | 15 kB 00:00
2026-03-20T12:39:32.596 INFO:teuthology.orchestra.run.vm09.stdout:(58/138): libnbd-1.20.3-4.el9.x86_64.rpm 490 kB/s | 164 kB 00:00
2026-03-20T12:39:32.704 INFO:teuthology.orchestra.run.vm09.stdout:(59/138): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.5 MB/s | 160 kB 00:00
2026-03-20T12:39:32.757 INFO:teuthology.orchestra.run.vm09.stdout:(60/138): librabbitmq-0.11.0-7.el9.x86_64.rpm 854 kB/s | 45 kB 00:00
2026-03-20T12:39:33.502 INFO:teuthology.orchestra.run.vm09.stdout:(61/138): librdkafka-1.6.1-102.el9.x86_64.rpm 889 kB/s | 662 kB 00:00
2026-03-20T12:39:33.503 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T12:39:33.503 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /sys
2026-03-20T12:39:33.503 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /proc
2026-03-20T12:39:33.504 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /mnt
2026-03-20T12:39:33.504 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /var/tmp
2026-03-20T12:39:33.504 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /home
2026-03-20T12:39:33.504 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /root
2026-03-20T12:39:33.504 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /tmp
2026-03-20T12:39:33.504 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:33.638 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T12:39:33.663 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T12:39:33.665 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:39:33.666 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T12:39:33.666 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T12:39:33.666 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T12:39:33.666 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:33.938 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T12:39:33.963 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T12:39:33.963 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:39:33.963 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T12:39:33.963 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T12:39:33.963 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T12:39:33.964 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:33.973 INFO:teuthology.orchestra.run.vm00.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140
2026-03-20T12:39:33.976 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140
2026-03-20T12:39:33.997 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T12:39:33.997 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'qat' with GID 994.
2026-03-20T12:39:33.997 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-20T12:39:33.997 INFO:teuthology.orchestra.run.vm00.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-20T12:39:33.997 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:34.009 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T12:39:34.039 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T12:39:34.039 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-20T12:39:34.039 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:34.064 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140
2026-03-20T12:39:34.096 INFO:teuthology.orchestra.run.vm00.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140
2026-03-20T12:39:34.173 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140
2026-03-20T12:39:34.178 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T12:39:34.193 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T12:39:34.193 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:39:34.193 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T12:39:34.193 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:34.928 INFO:teuthology.orchestra.run.vm09.stdout:(62/138): libstoragemgmt-1.10.1-1.el9.x86_64.rp 173 kB/s | 246 kB 00:01
2026-03-20T12:39:35.045 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T12:39:35.075 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T12:39:35.075 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:39:35.075 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T12:39:35.075 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T12:39:35.075 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T12:39:35.076 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:35.221 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T12:39:35.225 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T12:39:35.235 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140
2026-03-20T12:39:35.268 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140
2026-03-20T12:39:35.271 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T12:39:35.638 INFO:teuthology.orchestra.run.vm09.stdout:(63/138): libxslt-1.1.34-12.el9.x86_64.rpm 328 kB/s | 233 kB 00:00
2026-03-20T12:39:36.075 INFO:teuthology.orchestra.run.vm09.stdout:(64/138): lttng-ust-2.12.0-6.el9.x86_64.rpm 670 kB/s | 292 kB 00:00
2026-03-20T12:39:36.432 INFO:teuthology.orchestra.run.vm09.stdout:(65/138): flexiblas-netlib-3.0.4-9.el9.x86_64.r 709 kB/s | 3.0 MB 00:04
2026-03-20T12:39:36.470 INFO:teuthology.orchestra.run.vm09.stdout:(66/138): lua-5.4.4-4.el9.x86_64.rpm 478 kB/s | 188 kB 00:00
2026-03-20T12:39:36.501 INFO:teuthology.orchestra.run.vm09.stdout:(67/138): openblas-0.3.29-1.el9.x86_64.rpm 607 kB/s | 42 kB 00:00
2026-03-20T12:39:36.555 INFO:teuthology.orchestra.run.vm09.stdout:(68/138): perl-Benchmark-1.23-483.el9.noarch.rp 492 kB/s | 26 kB 00:00
2026-03-20T12:39:36.715 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T12:39:36.725 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T12:39:36.926 INFO:teuthology.orchestra.run.vm09.stdout:(69/138): perl-Test-Harness-3.42-461.el9.noarch 797 kB/s | 295 kB 00:00
2026-03-20T12:39:37.306 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T12:39:37.308 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T12:39:37.374 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T12:39:37.431 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140
2026-03-20T12:39:37.434 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T12:39:37.460 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T12:39:37.460 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:39:37.460 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T12:39:37.460 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T12:39:37.460 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T12:39:37.460 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:37.477 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T12:39:37.492 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T12:39:37.541 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140
2026-03-20T12:39:37.604 INFO:teuthology.orchestra.run.vm09.stdout:(70/138): protobuf-3.14.0-17.el9.x86_64.rpm 1.5 MB/s | 1.0 MB 00:00
2026-03-20T12:39:38.800 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140
2026-03-20T12:39:38.803 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T12:39:38.828 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T12:39:38.828 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:39:38.828 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-20T12:39:38.828 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T12:39:38.828 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T12:39:38.828 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:38.841 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T12:39:38.864 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T12:39:38.864 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:39:38.864 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T12:39:38.864 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:39.017 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T12:39:39.042 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T12:39:39.042 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:39:39.042 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T12:39:39.042 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T12:39:39.042 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T12:39:39.042 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:39:39.489 INFO:teuthology.orchestra.run.vm09.stdout:(71/138): openblas-openmp-0.3.29-1.el9.x86_64.r 1.8 MB/s | 5.3 MB 00:03
2026-03-20T12:39:39.583 INFO:teuthology.orchestra.run.vm09.stdout:(72/138): python3-devel-3.9.25-3.el9.x86_64.rpm 2.5 MB/s | 244 kB 00:00
2026-03-20T12:39:39.678 INFO:teuthology.orchestra.run.vm09.stdout:(73/138): python3-jinja2-2.11.3-8.el9.noarch.rp 2.6 MB/s | 249 kB 00:00
2026-03-20T12:39:39.804 INFO:teuthology.orchestra.run.vm09.stdout:(74/138): python3-jmespath-1.0.1-1.el9.noarch.r 377 kB/s | 48 kB 00:00
2026-03-20T12:39:39.868 INFO:teuthology.orchestra.run.vm09.stdout:(75/138): python3-libstoragemgmt-1.10.1-1.el9.x 2.7 MB/s | 177 kB 00:00
2026-03-20T12:39:39.900 INFO:teuthology.orchestra.run.vm09.stdout:(76/138): python3-markupsafe-1.1.1-12.el9.x86_6 1.1 MB/s | 35 kB 00:00
2026-03-20T12:39:39.941 INFO:teuthology.orchestra.run.vm09.stdout:(77/138): python3-babel-2.9.1-2.el9.noarch.rpm 2.6 MB/s | 6.0 MB 00:02
2026-03-20T12:39:40.067 INFO:teuthology.orchestra.run.vm09.stdout:(78/138): python3-numpy-f2py-1.23.5-2.el9.x86_6 3.4 MB/s | 442 kB 00:00
2026-03-20T12:39:40.100 INFO:teuthology.orchestra.run.vm09.stdout:(79/138): python3-packaging-20.9-5.el9.noarch.r 2.4 MB/s | 77 kB 00:00
2026-03-20T12:39:40.253 INFO:teuthology.orchestra.run.vm09.stdout:(80/138): python3-protobuf-3.14.0-17.el9.noarch 1.7 MB/s | 267 kB 00:00
2026-03-20T12:39:40.316 INFO:teuthology.orchestra.run.vm09.stdout:(81/138): python3-pyasn1-0.4.8-7.el9.noarch.rpm 2.5 MB/s | 157 kB 00:00
2026-03-20T12:39:40.410 INFO:teuthology.orchestra.run.vm09.stdout:(82/138): python3-pyasn1-modules-0.4.8-7.el9.no 2.9 MB/s | 277 kB 00:00
2026-03-20T12:39:40.492 INFO:teuthology.orchestra.run.vm09.stdout:(83/138): python3-requests-oauthlib-1.3.0-12.el 666 kB/s | 54 kB 00:00
2026-03-20T12:39:41.589 INFO:teuthology.orchestra.run.vm09.stdout:(84/138): python3-numpy-1.23.5-2.el9.x86_64.rpm 3.6 MB/s | 6.1 MB 00:01
2026-03-20T12:39:41.622 INFO:teuthology.orchestra.run.vm09.stdout:(85/138): python3-toml-0.10.2-6.el9.noarch.rpm 1.2 MB/s | 42 kB 00:00
2026-03-20T12:39:41.686 INFO:teuthology.orchestra.run.vm09.stdout:(86/138): qatlib-25.08.0-2.el9.x86_64.rpm 3.7 MB/s | 240 kB 00:00
2026-03-20T12:39:41.718 INFO:teuthology.orchestra.run.vm09.stdout:(87/138): qatlib-service-25.08.0-2.el9.x86_64.r 1.1 MB/s | 37 kB 00:00
2026-03-20T12:39:41.751 INFO:teuthology.orchestra.run.vm09.stdout:(88/138): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.0 MB/s | 66 kB 00:00
2026-03-20T12:39:41.845 INFO:teuthology.orchestra.run.vm09.stdout:(89/138): socat-1.7.4.1-8.el9.x86_64.rpm 3.2 MB/s | 303 kB 00:00
2026-03-20T12:39:41.877 INFO:teuthology.orchestra.run.vm09.stdout:(90/138): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.9 MB/s | 64 kB 00:00
2026-03-20T12:39:41.922 INFO:teuthology.orchestra.run.vm09.stdout:(91/138): lua-devel-5.4.4-4.el9.x86_64.rpm 499 kB/s | 22 kB 00:00
2026-03-20T12:39:42.012 INFO:teuthology.orchestra.run.vm09.stdout:(92/138): protobuf-compiler-3.14.0-17.el9.x86_6 9.7 MB/s | 862 kB 00:00
2026-03-20T12:39:42.565 INFO:teuthology.orchestra.run.vm09.stdout:(93/138): abseil-cpp-20211102.0-4.el9.x86_64.rp 992 kB/s | 551 kB 00:00
2026-03-20T12:39:42.703 INFO:teuthology.orchestra.run.vm09.stdout:(94/138): gperftools-libs-2.9.1-3.el9.x86_64.rp 2.2 MB/s | 308 kB 00:00
2026-03-20T12:39:42.768 INFO:teuthology.orchestra.run.vm09.stdout:(95/138): grpc-data-1.46.7-10.el9.noarch.rpm 296 kB/s | 19 kB 00:00
2026-03-20T12:39:43.356 INFO:teuthology.orchestra.run.vm09.stdout:(96/138): libarrow-9.0.0-15.el9.x86_64.rpm 7.5 MB/s | 4.4 MB 00:00
2026-03-20T12:39:43.444 INFO:teuthology.orchestra.run.vm09.stdout:(97/138): libarrow-doc-9.0.0-15.el9.noarch.rpm 282 kB/s | 25 kB 00:00
2026-03-20T12:39:43.532 INFO:teuthology.orchestra.run.vm09.stdout:(98/138): liboath-2.6.12-1.el9.x86_64.rpm 559 kB/s | 49 kB 00:00
2026-03-20T12:39:43.611 INFO:teuthology.orchestra.run.vm09.stdout:(99/138): libunwind-1.6.2-1.el9.x86_64.rpm 854 kB/s | 67 kB 00:00
2026-03-20T12:39:43.690 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140
2026-03-20T12:39:43.697 INFO:teuthology.orchestra.run.vm00.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140
2026-03-20T12:39:43.705 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140
2026-03-20T12:39:43.717 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140
2026-03-20T12:39:43.737 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140
2026-03-20T12:39:43.798 INFO:teuthology.orchestra.run.vm09.stdout:(100/138): luarocks-3.9.2-5.el9.noarch.rpm 810 kB/s | 151 kB 00:00
2026-03-20T12:39:43.868 INFO:teuthology.orchestra.run.vm00.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140
2026-03-20T12:39:43.872 INFO:teuthology.orchestra.run.vm00.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140
2026-03-20T12:39:43.872 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-20T12:39:43.893 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-20T12:39:43.893 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-20T12:39:43.920 INFO:teuthology.orchestra.run.vm09.stdout:(101/138): parquet-libs-9.0.0-15.el9.x86_64.rpm 6.7 MB/s | 838 kB 00:00
2026-03-20T12:39:44.527 INFO:teuthology.orchestra.run.vm09.stdout:(102/138): python3-scipy-1.9.3-2.el9.x86_64.rpm 4.8 MB/s | 19 MB 00:04
2026-03-20T12:39:44.670 INFO:teuthology.orchestra.run.vm09.stdout:(103/138): python3-asyncssh-2.13.2-5.el9.noarch 732 kB/s | 548 kB 00:00
2026-03-20T12:39:45.108 INFO:teuthology.orchestra.run.vm09.stdout:(104/138): python3-backports-tarfile-1.2.0-1.el 137 kB/s | 60 kB 00:00
2026-03-20T12:39:45.266 INFO:teuthology.orchestra.run.vm09.stdout:(105/138): python3-autocommand-2.2.2-8.el9.noar 40 kB/s | 29 kB 00:00
2026-03-20T12:39:45.299 INFO:teuthology.orchestra.run.vm09.stdout:(106/138): python3-bcrypt-3.2.2-1.el9.x86_64.rp 228 kB/s | 43 kB 00:00
2026-03-20T12:39:45.406 INFO:teuthology.orchestra.run.vm09.stdout:(107/138): python3-cachetools-4.2.4-1.el9.noarc 232 kB/s | 32 kB 00:00
2026-03-20T12:39:45.422 INFO:teuthology.orchestra.run.vm09.stdout:(108/138): python3-certifi-2023.05.07-4.el9.noa 115 kB/s | 14 kB 00:00
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140
2026-03-20T12:39:45.577 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140
2026-03-20T12:39:45.577
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140 2026-03-20T12:39:45.578 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140 2026-03-20T12:39:45.579 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140 2026-03-20T12:39:45.579 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140 2026-03-20T12:39:45.579 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140 2026-03-20T12:39:45.579 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140 2026-03-20T12:39:45.579 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : 
ledmon-libs-1.1.0-3.el9.x86_64 41/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 44/140 2026-03-20T12:39:45.580 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 51/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140 2026-03-20T12:39:45.581 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140 2026-03-20T12:39:45.581 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140 2026-03-20T12:39:45.582 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140 2026-03-20T12:39:45.582 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140 2026-03-20T12:39:45.582 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140 2026-03-20T12:39:45.582 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140 2026-03-20T12:39:45.582 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/140 2026-03-20T12:39:45.582 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140 
2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140 2026-03-20T12:39:45.583 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140 2026-03-20T12:39:45.584 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140 2026-03-20T12:39:45.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 
107/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 121/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : 
python3-natsort-7.1.1-5.el9.noarch 123/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140 2026-03-20T12:39:45.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140 2026-03-20T12:39:45.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140 2026-03-20T12:39:45.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140 2026-03-20T12:39:45.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : 
librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140 2026-03-20T12:39:45.662 INFO:teuthology.orchestra.run.vm06.stdout:(8/138): ceph-osd-20.2.0-712.g70f8415b.el9.x86_ 654 kB/s | 17 MB 00:26 2026-03-20T12:39:45.671 INFO:teuthology.orchestra.run.vm09.stdout:(109/138): python3-cheroot-10.0.1-4.el9.noarch. 655 kB/s | 173 kB 00:00 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout:Upgraded: 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout:Installed: 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: bzip2-1.0.8-11.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.802 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 
2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: 
flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: fuse-2.9.9-17.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: 
libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: perl-Benchmark-1.23-483.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: perl-Test-Harness-1:3.42-461.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-20T12:39:45.803 
INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-20T12:39:45.803 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply-3.11-14.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-20T12:39:45.804 
INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict-0.12.0-15.el9.noarch 
2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-20T12:39:45.804 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: re2-1:20211101-20.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: s3cmd-2.4.0-1.el9.noarch 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:39:45.805 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-03-20T12:39:45.827 INFO:teuthology.orchestra.run.vm06.stdout:(9/138): ceph-selinux-20.2.0-712.g70f8415b.el9. 
153 kB/s | 25 kB 00:00 2026-03-20T12:39:45.830 INFO:teuthology.orchestra.run.vm09.stdout:(110/138): python3-google-auth-2.45.0-1.el9.noa 1.6 MB/s | 254 kB 00:00 2026-03-20T12:39:45.890 DEBUG:teuthology.parallel:result is None 2026-03-20T12:39:45.996 INFO:teuthology.orchestra.run.vm09.stdout:(111/138): python3-cherrypy-18.6.1-2.el9.noarch 625 kB/s | 358 kB 00:00 2026-03-20T12:39:46.294 INFO:teuthology.orchestra.run.vm09.stdout:(112/138): python3-grpcio-tools-1.46.7-10.el9.x 485 kB/s | 144 kB 00:00 2026-03-20T12:39:46.396 INFO:teuthology.orchestra.run.vm09.stdout:(113/138): python3-jaraco-8.2.1-3.el9.noarch.rp 105 kB/s | 11 kB 00:00 2026-03-20T12:39:46.403 INFO:teuthology.orchestra.run.vm09.stdout:(114/138): python3-grpcio-1.46.7-10.el9.x86_64. 3.6 MB/s | 2.0 MB 00:00 2026-03-20T12:39:46.453 INFO:teuthology.orchestra.run.vm09.stdout:(115/138): python3-jaraco-classes-3.2.1-5.el9.n 313 kB/s | 18 kB 00:00 2026-03-20T12:39:46.487 INFO:teuthology.orchestra.run.vm09.stdout:(116/138): python3-jaraco-collections-3.0.0-8.e 277 kB/s | 23 kB 00:00 2026-03-20T12:39:46.521 INFO:teuthology.orchestra.run.vm09.stdout:(117/138): python3-jaraco-context-6.0.1-3.el9.n 292 kB/s | 20 kB 00:00 2026-03-20T12:39:46.599 INFO:teuthology.orchestra.run.vm09.stdout:(118/138): python3-jaraco-functools-3.5.0-2.el9 172 kB/s | 19 kB 00:00 2026-03-20T12:39:46.616 INFO:teuthology.orchestra.run.vm09.stdout:(119/138): python3-jaraco-text-4.0.0-2.el9.noar 278 kB/s | 26 kB 00:00 2026-03-20T12:39:46.846 INFO:teuthology.orchestra.run.vm09.stdout:(120/138): python3-more-itertools-8.12.0-2.el9. 
342 kB/s | 79 kB 00:00 2026-03-20T12:39:46.943 INFO:teuthology.orchestra.run.vm09.stdout:(121/138): python3-kubernetes-26.1.0-3.el9.noar 3.0 MB/s | 1.0 MB 00:00 2026-03-20T12:39:46.970 INFO:teuthology.orchestra.run.vm09.stdout:(122/138): python3-natsort-7.1.1-5.el9.noarch.r 466 kB/s | 58 kB 00:00 2026-03-20T12:39:47.014 INFO:teuthology.orchestra.run.vm09.stdout:(123/138): python3-portend-3.1.0-2.el9.noarch.r 235 kB/s | 16 kB 00:00 2026-03-20T12:39:47.086 INFO:teuthology.orchestra.run.vm09.stdout:(124/138): python3-repoze-lru-0.7-16.el9.noarch 429 kB/s | 31 kB 00:00 2026-03-20T12:39:47.286 INFO:teuthology.orchestra.run.vm09.stdout:(125/138): python3-pyOpenSSL-21.0.0-1.el9.noarc 285 kB/s | 90 kB 00:00 2026-03-20T12:39:47.293 INFO:teuthology.orchestra.run.vm09.stdout:(126/138): python3-routes-2.5.1-5.el9.noarch.rp 910 kB/s | 188 kB 00:00 2026-03-20T12:39:47.351 INFO:teuthology.orchestra.run.vm09.stdout:(127/138): python3-tempora-5.0.0-2.el9.noarch.r 623 kB/s | 36 kB 00:00 2026-03-20T12:39:47.427 INFO:teuthology.orchestra.run.vm09.stdout:(128/138): python3-rsa-4.9-2.el9.noarch.rpm 421 kB/s | 59 kB 00:00 2026-03-20T12:39:47.429 INFO:teuthology.orchestra.run.vm09.stdout:(129/138): python3-typing-extensions-4.15.0-1.e 1.1 MB/s | 86 kB 00:00 2026-03-20T12:39:47.501 INFO:teuthology.orchestra.run.vm09.stdout:(130/138): python3-xmltodict-0.12.0-15.el9.noar 310 kB/s | 22 kB 00:00 2026-03-20T12:39:47.559 INFO:teuthology.orchestra.run.vm09.stdout:(131/138): python3-zc-lockfile-2.0-10.el9.noarc 343 kB/s | 20 kB 00:00 2026-03-20T12:39:47.614 INFO:teuthology.orchestra.run.vm09.stdout:(132/138): python3-websocket-client-1.2.3-2.el9 478 kB/s | 90 kB 00:00 2026-03-20T12:39:47.688 INFO:teuthology.orchestra.run.vm09.stdout:(133/138): re2-20211101-20.el9.x86_64.rpm 1.5 MB/s | 191 kB 00:00 2026-03-20T12:39:47.943 INFO:teuthology.orchestra.run.vm09.stdout:(134/138): s3cmd-2.4.0-1.el9.noarch.rpm 627 kB/s | 206 kB 00:00 2026-03-20T12:39:48.080 
INFO:teuthology.orchestra.run.vm09.stdout:(135/138): thrift-0.15.0-4.el9.x86_64.rpm 4.0 MB/s | 1.6 MB 00:00 2026-03-20T12:39:49.111 INFO:teuthology.orchestra.run.vm09.stdout:(136/138): librbd1-20.2.0-712.g70f8415b.el9.x86 2.8 MB/s | 2.8 MB 00:01 2026-03-20T12:39:51.987 INFO:teuthology.orchestra.run.vm06.stdout:(10/138): ceph-common-20.2.0-712.g70f8415b.el9. 638 kB/s | 24 MB 00:38 2026-03-20T12:39:52.111 INFO:teuthology.orchestra.run.vm06.stdout:(11/138): libcephfs-devel-20.2.0-712.g70f8415b. 279 kB/s | 34 kB 00:00 2026-03-20T12:39:52.227 INFO:teuthology.orchestra.run.vm06.stdout:(12/138): libcephfs-proxy2-20.2.0-712.g70f8415b 209 kB/s | 24 kB 00:00 2026-03-20T12:39:52.800 INFO:teuthology.orchestra.run.vm06.stdout:(13/138): libcephfs2-20.2.0-712.g70f8415b.el9.x 1.5 MB/s | 866 kB 00:00 2026-03-20T12:39:52.916 INFO:teuthology.orchestra.run.vm06.stdout:(14/138): libcephsqlite-20.2.0-712.g70f8415b.el 1.4 MB/s | 164 kB 00:00 2026-03-20T12:39:53.033 INFO:teuthology.orchestra.run.vm06.stdout:(15/138): librados-devel-20.2.0-712.g70f8415b.e 1.1 MB/s | 126 kB 00:00 2026-03-20T12:39:53.263 INFO:teuthology.orchestra.run.vm06.stdout:(16/138): libradosstriper1-20.2.0-712.g70f8415b 1.1 MB/s | 250 kB 00:00 2026-03-20T12:39:53.430 INFO:teuthology.orchestra.run.vm09.stdout:(137/138): librados2-20.2.0-712.g70f8415b.el9.x 658 kB/s | 3.5 MB 00:05 2026-03-20T12:39:56.367 INFO:teuthology.orchestra.run.vm06.stdout:(17/138): librgw2-20.2.0-712.g70f8415b.el9.x86_ 2.1 MB/s | 6.4 MB 00:03 2026-03-20T12:39:56.482 INFO:teuthology.orchestra.run.vm06.stdout:(18/138): python3-ceph-argparse-20.2.0-712.g70f 392 kB/s | 45 kB 00:00 2026-03-20T12:39:56.600 INFO:teuthology.orchestra.run.vm06.stdout:(19/138): python3-ceph-common-20.2.0-712.g70f84 1.5 MB/s | 175 kB 00:00 2026-03-20T12:39:56.716 INFO:teuthology.orchestra.run.vm06.stdout:(20/138): python3-cephfs-20.2.0-712.g70f8415b.e 1.4 MB/s | 163 kB 00:00 2026-03-20T12:39:56.946 INFO:teuthology.orchestra.run.vm06.stdout:(21/138): 
python3-rados-20.2.0-712.g70f8415b.el 1.4 MB/s | 324 kB 00:00 2026-03-20T12:39:57.066 INFO:teuthology.orchestra.run.vm06.stdout:(22/138): python3-rbd-20.2.0-712.g70f8415b.el9. 2.5 MB/s | 304 kB 00:00 2026-03-20T12:39:57.202 INFO:teuthology.orchestra.run.vm06.stdout:(23/138): ceph-radosgw-20.2.0-712.g70f8415b.el9 737 kB/s | 24 MB 00:32 2026-03-20T12:39:57.204 INFO:teuthology.orchestra.run.vm06.stdout:(24/138): python3-rgw-20.2.0-712.g70f8415b.el9. 717 kB/s | 99 kB 00:00 2026-03-20T12:39:57.317 INFO:teuthology.orchestra.run.vm06.stdout:(25/138): rbd-fuse-20.2.0-712.g70f8415b.el9.x86 797 kB/s | 91 kB 00:00 2026-03-20T12:39:57.432 INFO:teuthology.orchestra.run.vm06.stdout:(26/138): rbd-nbd-20.2.0-712.g70f8415b.el9.x86_ 1.5 MB/s | 180 kB 00:00 2026-03-20T12:39:57.544 INFO:teuthology.orchestra.run.vm06.stdout:(27/138): ceph-grafana-dashboards-20.2.0-712.g7 385 kB/s | 43 kB 00:00 2026-03-20T12:39:57.696 INFO:teuthology.orchestra.run.vm06.stdout:(28/138): ceph-mgr-cephadm-20.2.0-712.g70f8415b 1.1 MB/s | 173 kB 00:00 2026-03-20T12:39:58.279 INFO:teuthology.orchestra.run.vm06.stdout:(29/138): rbd-mirror-20.2.0-712.g70f8415b.el9.x 2.7 MB/s | 2.9 MB 00:01 2026-03-20T12:40:00.382 INFO:teuthology.orchestra.run.vm06.stdout:(30/138): ceph-mgr-diskprediction-local-20.2.0- 3.5 MB/s | 7.4 MB 00:02 2026-03-20T12:40:00.547 INFO:teuthology.orchestra.run.vm06.stdout:(31/138): ceph-mgr-modules-core-20.2.0-712.g70f 1.7 MB/s | 290 kB 00:00 2026-03-20T12:40:00.702 INFO:teuthology.orchestra.run.vm06.stdout:(32/138): ceph-mgr-rook-20.2.0-712.g70f8415b.el 323 kB/s | 50 kB 00:00 2026-03-20T12:40:00.869 INFO:teuthology.orchestra.run.vm06.stdout:(33/138): ceph-prometheus-alerts-20.2.0-712.g70 104 kB/s | 17 kB 00:00 2026-03-20T12:40:01.031 INFO:teuthology.orchestra.run.vm06.stdout:(34/138): ceph-volume-20.2.0-712.g70f8415b.el9. 
1.8 MB/s | 298 kB 00:00 2026-03-20T12:40:01.369 INFO:teuthology.orchestra.run.vm06.stdout:(35/138): ceph-mgr-dashboard-20.2.0-712.g70f841 2.9 MB/s | 11 MB 00:03 2026-03-20T12:40:01.372 INFO:teuthology.orchestra.run.vm06.stdout:(36/138): cephadm-20.2.0-712.g70f8415b.el9.noar 2.9 MB/s | 1.0 MB 00:00 2026-03-20T12:40:01.621 INFO:teuthology.orchestra.run.vm06.stdout:(37/138): bzip2-1.0.8-11.el9.x86_64.rpm 218 kB/s | 55 kB 00:00 2026-03-20T12:40:01.674 INFO:teuthology.orchestra.run.vm06.stdout:(38/138): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.1 MB/s | 351 kB 00:00 2026-03-20T12:40:01.722 INFO:teuthology.orchestra.run.vm06.stdout:(39/138): fuse-2.9.9-17.el9.x86_64.rpm 790 kB/s | 80 kB 00:00 2026-03-20T12:40:01.725 INFO:teuthology.orchestra.run.vm06.stdout:(40/138): ledmon-libs-1.1.0-3.el9.x86_64.rpm 798 kB/s | 40 kB 00:00 2026-03-20T12:40:01.778 INFO:teuthology.orchestra.run.vm06.stdout:(41/138): libconfig-1.7.2-9.el9.x86_64.rpm 1.3 MB/s | 72 kB 00:00 2026-03-20T12:40:01.833 INFO:teuthology.orchestra.run.vm06.stdout:(42/138): libgfortran-11.5.0-14.el9.x86_64.rpm 7.2 MB/s | 794 kB 00:00 2026-03-20T12:40:01.880 INFO:teuthology.orchestra.run.vm06.stdout:(43/138): libquadmath-11.5.0-14.el9.x86_64.rpm 1.8 MB/s | 184 kB 00:00 2026-03-20T12:40:01.884 INFO:teuthology.orchestra.run.vm06.stdout:(44/138): mailcap-2.1.49-5.el9.noarch.rpm 655 kB/s | 33 kB 00:00 2026-03-20T12:40:01.934 INFO:teuthology.orchestra.run.vm06.stdout:(45/138): pciutils-3.7.0-7.el9.x86_64.rpm 1.7 MB/s | 93 kB 00:00 2026-03-20T12:40:01.937 INFO:teuthology.orchestra.run.vm06.stdout:(46/138): python3-cffi-1.14.5-5.el9.x86_64.rpm 4.6 MB/s | 253 kB 00:00 2026-03-20T12:40:02.015 INFO:teuthology.orchestra.run.vm06.stdout:(47/138): python3-ply-3.11-14.el9.noarch.rpm 1.4 MB/s | 106 kB 00:00 2026-03-20T12:40:02.067 INFO:teuthology.orchestra.run.vm06.stdout:(48/138): python3-pycparser-2.20-6.el9.noarch.r 2.5 MB/s | 135 kB 00:00 2026-03-20T12:40:02.090 INFO:teuthology.orchestra.run.vm06.stdout:(49/138): 
python3-cryptography-36.0.1-5.el9.x86 8.0 MB/s | 1.2 MB 00:00 2026-03-20T12:40:02.119 INFO:teuthology.orchestra.run.vm06.stdout:(50/138): python3-pyparsing-2.4.7-9.el9.noarch. 2.8 MB/s | 150 kB 00:00 2026-03-20T12:40:02.142 INFO:teuthology.orchestra.run.vm06.stdout:(51/138): python3-requests-2.25.1-10.el9.noarch 2.4 MB/s | 126 kB 00:00 2026-03-20T12:40:02.172 INFO:teuthology.orchestra.run.vm06.stdout:(52/138): python3-urllib3-1.26.5-7.el9.noarch.r 4.0 MB/s | 218 kB 00:00 2026-03-20T12:40:02.205 INFO:teuthology.orchestra.run.vm06.stdout:(53/138): unzip-6.0-59.el9.x86_64.rpm 2.9 MB/s | 182 kB 00:00 2026-03-20T12:40:02.222 INFO:teuthology.orchestra.run.vm06.stdout:(54/138): boost-program-options-1.75.0-13.el9.x 5.9 MB/s | 104 kB 00:00 2026-03-20T12:40:02.226 INFO:teuthology.orchestra.run.vm06.stdout:(55/138): zip-3.0-35.el9.x86_64.rpm 4.9 MB/s | 266 kB 00:00 2026-03-20T12:40:02.239 INFO:teuthology.orchestra.run.vm06.stdout:(56/138): flexiblas-3.0.4-9.el9.x86_64.rpm 1.7 MB/s | 30 kB 00:00 2026-03-20T12:40:02.243 INFO:teuthology.orchestra.run.vm06.stdout:(57/138): flexiblas-openblas-openmp-3.0.4-9.el9 3.8 MB/s | 15 kB 00:00 2026-03-20T12:40:02.290 INFO:teuthology.orchestra.run.vm06.stdout:(58/138): libnbd-1.20.3-4.el9.x86_64.rpm 3.5 MB/s | 164 kB 00:00 2026-03-20T12:40:02.300 INFO:teuthology.orchestra.run.vm06.stdout:(59/138): flexiblas-netlib-3.0.4-9.el9.x86_64.r 41 MB/s | 3.0 MB 00:00 2026-03-20T12:40:02.310 INFO:teuthology.orchestra.run.vm06.stdout:(60/138): librabbitmq-0.11.0-7.el9.x86_64.rpm 4.2 MB/s | 45 kB 00:00 2026-03-20T12:40:02.340 INFO:teuthology.orchestra.run.vm06.stdout:(61/138): libpmemobj-1.12.1-1.el9.x86_64.rpm 3.2 MB/s | 160 kB 00:00 2026-03-20T12:40:02.342 INFO:teuthology.orchestra.run.vm06.stdout:(62/138): librdkafka-1.6.1-102.el9.x86_64.rpm 20 MB/s | 662 kB 00:00 2026-03-20T12:40:02.360 INFO:teuthology.orchestra.run.vm06.stdout:(63/138): libstoragemgmt-1.10.1-1.el9.x86_64.rp 12 MB/s | 246 kB 00:00 2026-03-20T12:40:02.389 
INFO:teuthology.orchestra.run.vm06.stdout:(64/138): lttng-ust-2.12.0-6.el9.x86_64.rpm 9.9 MB/s | 292 kB 00:00 2026-03-20T12:40:02.402 INFO:teuthology.orchestra.run.vm06.stdout:(65/138): libxslt-1.1.34-12.el9.x86_64.rpm 3.8 MB/s | 233 kB 00:00 2026-03-20T12:40:02.410 INFO:teuthology.orchestra.run.vm06.stdout:(66/138): lua-5.4.4-4.el9.x86_64.rpm 8.6 MB/s | 188 kB 00:00 2026-03-20T12:40:02.420 INFO:teuthology.orchestra.run.vm06.stdout:(67/138): openblas-0.3.29-1.el9.x86_64.rpm 2.3 MB/s | 42 kB 00:00 2026-03-20T12:40:02.431 INFO:teuthology.orchestra.run.vm06.stdout:(68/138): perl-Benchmark-1.23-483.el9.noarch.rp 2.3 MB/s | 26 kB 00:00 2026-03-20T12:40:02.471 INFO:teuthology.orchestra.run.vm06.stdout:(69/138): perl-Test-Harness-3.42-461.el9.noarch 7.2 MB/s | 295 kB 00:00 2026-03-20T12:40:02.513 INFO:teuthology.orchestra.run.vm06.stdout:(70/138): openblas-openmp-0.3.29-1.el9.x86_64.r 51 MB/s | 5.3 MB 00:00 2026-03-20T12:40:02.528 INFO:teuthology.orchestra.run.vm06.stdout:(71/138): protobuf-3.14.0-17.el9.x86_64.rpm 18 MB/s | 1.0 MB 00:00 2026-03-20T12:40:02.648 INFO:teuthology.orchestra.run.vm06.stdout:(72/138): python3-babel-2.9.1-2.el9.noarch.rpm 44 MB/s | 6.0 MB 00:00 2026-03-20T12:40:02.651 INFO:teuthology.orchestra.run.vm06.stdout:(73/138): python3-devel-3.9.25-3.el9.x86_64.rpm 1.9 MB/s | 244 kB 00:00 2026-03-20T12:40:02.671 INFO:teuthology.orchestra.run.vm06.stdout:(74/138): python3-jinja2-2.11.3-8.el9.noarch.rp 11 MB/s | 249 kB 00:00 2026-03-20T12:40:02.676 INFO:teuthology.orchestra.run.vm06.stdout:(75/138): python3-jmespath-1.0.1-1.el9.noarch.r 1.8 MB/s | 48 kB 00:00 2026-03-20T12:40:02.699 INFO:teuthology.orchestra.run.vm06.stdout:(76/138): python3-libstoragemgmt-1.10.1-1.el9.x 6.2 MB/s | 177 kB 00:00 2026-03-20T12:40:02.700 INFO:teuthology.orchestra.run.vm06.stdout:(77/138): python3-markupsafe-1.1.1-12.el9.x86_6 1.5 MB/s | 35 kB 00:00 2026-03-20T12:40:02.785 INFO:teuthology.orchestra.run.vm06.stdout:(78/138): python3-numpy-f2py-1.23.5-2.el9.x86_6 5.1 MB/s | 442 
kB 00:00 2026-03-20T12:40:02.800 INFO:teuthology.orchestra.run.vm06.stdout:(79/138): python3-numpy-1.23.5-2.el9.x86_64.rpm 61 MB/s | 6.1 MB 00:00 2026-03-20T12:40:02.803 INFO:teuthology.orchestra.run.vm06.stdout:(80/138): python3-packaging-20.9-5.el9.noarch.r 4.2 MB/s | 77 kB 00:00 2026-03-20T12:40:02.888 INFO:teuthology.orchestra.run.vm06.stdout:(81/138): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.8 MB/s | 157 kB 00:00 2026-03-20T12:40:02.888 INFO:teuthology.orchestra.run.vm09.stdout:(138/138): ceph-test-20.2.0-712.g70f8415b.el9.x 1.7 MB/s | 84 MB 00:48 2026-03-20T12:40:02.889 INFO:teuthology.orchestra.run.vm06.stdout:(82/138): python3-protobuf-3.14.0-17.el9.noarch 2.9 MB/s | 267 kB 00:00 2026-03-20T12:40:02.893 INFO:teuthology.orchestra.run.vm09.stdout:-------------------------------------------------------------------------------- 2026-03-20T12:40:02.893 INFO:teuthology.orchestra.run.vm09.stdout:Total 5.0 MB/s | 267 MB 00:53 2026-03-20T12:40:02.911 INFO:teuthology.orchestra.run.vm06.stdout:(83/138): python3-requests-oauthlib-1.3.0-12.el 2.4 MB/s | 54 kB 00:00 2026-03-20T12:40:02.935 INFO:teuthology.orchestra.run.vm06.stdout:(84/138): python3-pyasn1-modules-0.4.8-7.el9.no 5.8 MB/s | 277 kB 00:00 2026-03-20T12:40:02.951 INFO:teuthology.orchestra.run.vm06.stdout:(85/138): python3-toml-0.10.2-6.el9.noarch.rpm 2.6 MB/s | 42 kB 00:00 2026-03-20T12:40:02.982 INFO:teuthology.orchestra.run.vm06.stdout:(86/138): qatlib-25.08.0-2.el9.x86_64.rpm 7.5 MB/s | 240 kB 00:00 2026-03-20T12:40:03.003 INFO:teuthology.orchestra.run.vm06.stdout:(87/138): qatlib-service-25.08.0-2.el9.x86_64.r 1.8 MB/s | 37 kB 00:00 2026-03-20T12:40:03.020 INFO:teuthology.orchestra.run.vm06.stdout:(88/138): qatzip-libs-1.3.1-1.el9.x86_64.rpm 4.0 MB/s | 66 kB 00:00 2026-03-20T12:40:03.048 INFO:teuthology.orchestra.run.vm06.stdout:(89/138): socat-1.7.4.1-8.el9.x86_64.rpm 11 MB/s | 303 kB 00:00 2026-03-20T12:40:03.085 INFO:teuthology.orchestra.run.vm06.stdout:(90/138): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.7 
MB/s | 64 kB 00:00 2026-03-20T12:40:03.204 INFO:teuthology.orchestra.run.vm06.stdout:(91/138): python3-scipy-1.9.3-2.el9.x86_64.rpm 66 MB/s | 19 MB 00:00 2026-03-20T12:40:03.410 INFO:teuthology.orchestra.run.vm06.stdout:(92/138): lua-devel-5.4.4-4.el9.x86_64.rpm 69 kB/s | 22 kB 00:00 2026-03-20T12:40:03.531 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-20T12:40:03.588 INFO:teuthology.orchestra.run.vm06.stdout:(93/138): protobuf-compiler-3.14.0-17.el9.x86_6 2.2 MB/s | 862 kB 00:00 2026-03-20T12:40:03.589 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-20T12:40:03.589 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-20T12:40:04.128 INFO:teuthology.orchestra.run.vm06.stdout:(94/138): abseil-cpp-20211102.0-4.el9.x86_64.rp 769 kB/s | 551 kB 00:00 2026-03-20T12:40:04.173 INFO:teuthology.orchestra.run.vm06.stdout:(95/138): gperftools-libs-2.9.1-3.el9.x86_64.rp 526 kB/s | 308 kB 00:00 2026-03-20T12:40:04.194 INFO:teuthology.orchestra.run.vm06.stdout:(96/138): grpc-data-1.46.7-10.el9.noarch.rpm 294 kB/s | 19 kB 00:00 2026-03-20T12:40:04.262 INFO:teuthology.orchestra.run.vm06.stdout:(97/138): libarrow-doc-9.0.0-15.el9.noarch.rpm 367 kB/s | 25 kB 00:00 2026-03-20T12:40:04.382 INFO:teuthology.orchestra.run.vm06.stdout:(98/138): liboath-2.6.12-1.el9.x86_64.rpm 410 kB/s | 49 kB 00:00 2026-03-20T12:40:04.454 INFO:teuthology.orchestra.run.vm06.stdout:(99/138): libunwind-1.6.2-1.el9.x86_64.rpm 939 kB/s | 67 kB 00:00 2026-03-20T12:40:04.586 INFO:teuthology.orchestra.run.vm06.stdout:(100/138): luarocks-3.9.2-5.el9.noarch.rpm 1.1 MB/s | 151 kB 00:00 2026-03-20T12:40:04.629 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-20T12:40:04.629 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-20T12:40:04.921 INFO:teuthology.orchestra.run.vm06.stdout:(101/138): parquet-libs-9.0.0-15.el9.x86_64.rpm 2.4 MB/s | 838 kB 00:00 2026-03-20T12:40:05.120 INFO:teuthology.orchestra.run.vm06.stdout:(102/138): python3-asyncssh-2.13.2-5.el9.noarch 2.7 MB/s | 548 kB 00:00 2026-03-20T12:40:05.188 INFO:teuthology.orchestra.run.vm06.stdout:(103/138): python3-autocommand-2.2.2-8.el9.noar 434 kB/s | 29 kB 00:00 2026-03-20T12:40:05.257 INFO:teuthology.orchestra.run.vm06.stdout:(104/138): python3-backports-tarfile-1.2.0-1.el 876 kB/s | 60 kB 00:00 2026-03-20T12:40:05.325 INFO:teuthology.orchestra.run.vm06.stdout:(105/138): python3-bcrypt-3.2.2-1.el9.x86_64.rp 640 kB/s | 43 kB 00:00 2026-03-20T12:40:05.393 INFO:teuthology.orchestra.run.vm06.stdout:(106/138): python3-cachetools-4.2.4-1.el9.noarc 476 kB/s | 32 kB 00:00 2026-03-20T12:40:05.466 INFO:teuthology.orchestra.run.vm06.stdout:(107/138): python3-certifi-2023.05.07-4.el9.noa 194 kB/s | 14 kB 00:00 2026-03-20T12:40:05.535 INFO:teuthology.orchestra.run.vm06.stdout:(108/138): python3-cheroot-10.0.1-4.el9.noarch. 
2.4 MB/s | 173 kB 00:00 2026-03-20T12:40:05.671 INFO:teuthology.orchestra.run.vm06.stdout:(109/138): python3-cherrypy-18.6.1-2.el9.noarch 2.6 MB/s | 358 kB 00:00 2026-03-20T12:40:05.744 INFO:teuthology.orchestra.run.vm06.stdout:(110/138): python3-google-auth-2.45.0-1.el9.noa 3.4 MB/s | 254 kB 00:00 2026-03-20T12:40:05.786 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-20T12:40:05.795 INFO:teuthology.orchestra.run.vm09.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140 2026-03-20T12:40:05.798 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140 2026-03-20T12:40:05.812 INFO:teuthology.orchestra.run.vm09.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140 2026-03-20T12:40:05.997 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140 2026-03-20T12:40:05.999 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-20T12:40:06.033 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-20T12:40:06.042 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-20T12:40:06.045 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140 2026-03-20T12:40:06.050 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140 2026-03-20T12:40:06.054 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140 2026-03-20T12:40:06.060 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140 2026-03-20T12:40:06.202 INFO:teuthology.orchestra.run.vm06.stdout:(111/138): python3-grpcio-1.46.7-10.el9.x86_64. 
4.5 MB/s | 2.0 MB 00:00 2026-03-20T12:40:06.209 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140 2026-03-20T12:40:06.211 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T12:40:06.233 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T12:40:06.234 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-20T12:40:06.257 INFO:teuthology.orchestra.run.vm06.stdout:(112/138): libarrow-9.0.0-15.el9.x86_64.rpm 2.1 MB/s | 4.4 MB 00:02 2026-03-20T12:40:06.258 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-20T12:40:06.260 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T12:40:06.272 INFO:teuthology.orchestra.run.vm06.stdout:(113/138): python3-grpcio-tools-1.46.7-10.el9.x 2.0 MB/s | 144 kB 00:00 2026-03-20T12:40:06.274 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T12:40:06.314 INFO:teuthology.orchestra.run.vm09.stdout: Installing : re2-1:20211101-20.el9.x86_64 15/140 2026-03-20T12:40:06.325 INFO:teuthology.orchestra.run.vm06.stdout:(114/138): python3-jaraco-8.2.1-3.el9.noarch.rp 159 kB/s | 11 kB 00:00 2026-03-20T12:40:06.339 INFO:teuthology.orchestra.run.vm06.stdout:(115/138): python3-jaraco-classes-3.2.1-5.el9.n 267 kB/s | 18 kB 00:00 2026-03-20T12:40:06.340 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140 2026-03-20T12:40:06.353 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/140 2026-03-20T12:40:06.360 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140 2026-03-20T12:40:06.364 
INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140 2026-03-20T12:40:06.369 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140 2026-03-20T12:40:06.392 INFO:teuthology.orchestra.run.vm06.stdout:(116/138): python3-jaraco-collections-3.0.0-8.e 347 kB/s | 23 kB 00:00 2026-03-20T12:40:06.398 INFO:teuthology.orchestra.run.vm09.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140 2026-03-20T12:40:06.405 INFO:teuthology.orchestra.run.vm06.stdout:(117/138): python3-jaraco-context-6.0.1-3.el9.n 295 kB/s | 20 kB 00:00 2026-03-20T12:40:06.416 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140 2026-03-20T12:40:06.421 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140 2026-03-20T12:40:06.428 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140 2026-03-20T12:40:06.453 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140 2026-03-20T12:40:06.459 INFO:teuthology.orchestra.run.vm06.stdout:(118/138): python3-jaraco-functools-3.5.0-2.el9 290 kB/s | 19 kB 00:00 2026-03-20T12:40:06.473 INFO:teuthology.orchestra.run.vm06.stdout:(119/138): python3-jaraco-text-4.0.0-2.el9.noar 392 kB/s | 26 kB 00:00 2026-03-20T12:40:06.494 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140 2026-03-20T12:40:06.501 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140 2026-03-20T12:40:06.503 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140 2026-03-20T12:40:06.504 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-20T12:40:06.542 INFO:teuthology.orchestra.run.vm06.stdout:(120/138): python3-more-itertools-8.12.0-2.el9. 
1.1 MB/s | 79 kB 00:00 2026-03-20T12:40:06.557 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-20T12:40:06.559 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-20T12:40:06.581 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-20T12:40:06.596 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140 2026-03-20T12:40:06.604 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140 2026-03-20T12:40:06.609 INFO:teuthology.orchestra.run.vm06.stdout:(121/138): python3-natsort-7.1.1-5.el9.noarch.r 860 kB/s | 58 kB 00:00 2026-03-20T12:40:06.634 INFO:teuthology.orchestra.run.vm09.stdout: Installing : zip-3.0-35.el9.x86_64 33/140 2026-03-20T12:40:06.640 INFO:teuthology.orchestra.run.vm09.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140 2026-03-20T12:40:06.648 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140 2026-03-20T12:40:06.676 INFO:teuthology.orchestra.run.vm06.stdout:(122/138): python3-portend-3.1.0-2.el9.noarch.r 244 kB/s | 16 kB 00:00 2026-03-20T12:40:06.706 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140 2026-03-20T12:40:06.722 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140 2026-03-20T12:40:06.742 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/140 2026-03-20T12:40:06.745 INFO:teuthology.orchestra.run.vm06.stdout:(123/138): python3-pyOpenSSL-21.0.0-1.el9.noarc 1.3 MB/s | 90 kB 00:00 2026-03-20T12:40:06.748 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140 2026-03-20T12:40:06.757 
INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140 2026-03-20T12:40:06.765 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140 2026-03-20T12:40:06.769 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140 2026-03-20T12:40:06.787 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140 2026-03-20T12:40:06.793 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140 2026-03-20T12:40:06.800 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140 2026-03-20T12:40:06.814 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140 2026-03-20T12:40:06.814 INFO:teuthology.orchestra.run.vm06.stdout:(124/138): python3-repoze-lru-0.7-16.el9.noarch 449 kB/s | 31 kB 00:00 2026-03-20T12:40:06.826 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140 2026-03-20T12:40:06.832 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140 2026-03-20T12:40:06.842 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140 2026-03-20T12:40:06.885 INFO:teuthology.orchestra.run.vm06.stdout:(125/138): python3-routes-2.5.1-5.el9.noarch.rp 2.6 MB/s | 188 kB 00:00 2026-03-20T12:40:06.892 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140 2026-03-20T12:40:06.953 INFO:teuthology.orchestra.run.vm06.stdout:(126/138): python3-rsa-4.9-2.el9.noarch.rpm 874 kB/s | 59 kB 00:00 2026-03-20T12:40:07.020 INFO:teuthology.orchestra.run.vm06.stdout:(127/138): python3-tempora-5.0.0-2.el9.noarch.r 536 kB/s | 36 kB 00:00 2026-03-20T12:40:07.254 INFO:teuthology.orchestra.run.vm06.stdout:(128/138): 
ceph-test-20.2.0-712.g70f8415b.el9.x 3.9 MB/s | 84 MB 00:21 2026-03-20T12:40:07.257 INFO:teuthology.orchestra.run.vm06.stdout:(129/138): python3-kubernetes-26.1.0-3.el9.noar 1.3 MB/s | 1.0 MB 00:00 2026-03-20T12:40:07.259 INFO:teuthology.orchestra.run.vm06.stdout:(130/138): python3-typing-extensions-4.15.0-1.e 361 kB/s | 86 kB 00:00 2026-03-20T12:40:07.298 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140 2026-03-20T12:40:07.315 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140 2026-03-20T12:40:07.321 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140 2026-03-20T12:40:07.325 INFO:teuthology.orchestra.run.vm06.stdout:(131/138): python3-xmltodict-0.12.0-15.el9.noar 329 kB/s | 22 kB 00:00 2026-03-20T12:40:07.325 INFO:teuthology.orchestra.run.vm06.stdout:(132/138): python3-zc-lockfile-2.0-10.el9.noarc 301 kB/s | 20 kB 00:00 2026-03-20T12:40:07.329 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140 2026-03-20T12:40:07.333 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140 2026-03-20T12:40:07.340 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140 2026-03-20T12:40:07.343 INFO:teuthology.orchestra.run.vm09.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140 2026-03-20T12:40:07.346 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140 2026-03-20T12:40:07.377 INFO:teuthology.orchestra.run.vm09.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140 2026-03-20T12:40:07.394 INFO:teuthology.orchestra.run.vm06.stdout:(133/138): re2-20211101-20.el9.x86_64.rpm 2.7 MB/s | 191 kB 00:00 2026-03-20T12:40:07.433 INFO:teuthology.orchestra.run.vm09.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140 
2026-03-20T12:40:07.447 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140
2026-03-20T12:40:07.454 INFO:teuthology.orchestra.run.vm09.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140
2026-03-20T12:40:07.459 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140
2026-03-20T12:40:07.467 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140
2026-03-20T12:40:07.472 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140
2026-03-20T12:40:07.482 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140
2026-03-20T12:40:07.487 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140
2026-03-20T12:40:07.515 INFO:teuthology.orchestra.run.vm06.stdout:(134/138): python3-websocket-client-1.2.3-2.el9 344 kB/s | 90 kB 00:00
2026-03-20T12:40:07.522 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 68/140
2026-03-20T12:40:07.536 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140
2026-03-20T12:40:07.546 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140
2026-03-20T12:40:07.555 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140
2026-03-20T12:40:07.596 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140
2026-03-20T12:40:07.653 INFO:teuthology.orchestra.run.vm06.stdout:(135/138): s3cmd-2.4.0-1.el9.noarch.rpm 629 kB/s | 206 kB 00:00
2026-03-20T12:40:07.663 INFO:teuthology.orchestra.run.vm06.stdout:(136/138): thrift-0.15.0-4.el9.x86_64.rpm 5.9 MB/s | 1.6 MB 00:00
2026-03-20T12:40:07.864 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140
2026-03-20T12:40:07.895 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140
2026-03-20T12:40:07.899 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-20T12:40:07.903 INFO:teuthology.orchestra.run.vm09.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140
2026-03-20T12:40:07.964 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140
2026-03-20T12:40:07.967 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/140
2026-03-20T12:40:07.992 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140
2026-03-20T12:40:08.226 INFO:teuthology.orchestra.run.vm06.stdout:(137/138): librados2-20.2.0-712.g70f8415b.el9.x 5.0 MB/s | 3.5 MB 00:00
2026-03-20T12:40:08.390 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140
2026-03-20T12:40:08.482 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140
2026-03-20T12:40:08.886 INFO:teuthology.orchestra.run.vm06.stdout:(138/138): librbd1-20.2.0-712.g70f8415b.el9.x86 2.3 MB/s | 2.8 MB 00:01
2026-03-20T12:40:08.888 INFO:teuthology.orchestra.run.vm06.stdout:--------------------------------------------------------------------------------
2026-03-20T12:40:08.888 INFO:teuthology.orchestra.run.vm06.stdout:Total 4.7 MB/s | 267 MB 00:56
2026-03-20T12:40:09.294 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140
2026-03-20T12:40:09.321 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140
2026-03-20T12:40:09.327 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140
2026-03-20T12:40:09.331 INFO:teuthology.orchestra.run.vm09.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140
2026-03-20T12:40:09.339 INFO:teuthology.orchestra.run.vm09.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140
2026-03-20T12:40:09.594 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction check
2026-03-20T12:40:09.664 INFO:teuthology.orchestra.run.vm06.stdout:Transaction check succeeded.
2026-03-20T12:40:09.664 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction test
2026-03-20T12:40:09.673 INFO:teuthology.orchestra.run.vm09.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140
2026-03-20T12:40:09.675 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T12:40:09.697 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T12:40:09.700 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140
2026-03-20T12:40:10.780 INFO:teuthology.orchestra.run.vm06.stdout:Transaction test succeeded.
2026-03-20T12:40:10.780 INFO:teuthology.orchestra.run.vm06.stdout:Running transaction
2026-03-20T12:40:10.982 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:40:10.987 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:40:11.006 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:40:11.020 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 91/140
2026-03-20T12:40:11.030 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140
2026-03-20T12:40:11.047 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140
2026-03-20T12:40:11.067 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140
2026-03-20T12:40:11.164 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140
2026-03-20T12:40:11.178 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140
2026-03-20T12:40:11.208 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140
2026-03-20T12:40:11.252 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140
2026-03-20T12:40:11.316 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 99/140
2026-03-20T12:40:11.338 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140
2026-03-20T12:40:11.345 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140
2026-03-20T12:40:11.359 INFO:teuthology.orchestra.run.vm09.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140
2026-03-20T12:40:11.365 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140
2026-03-20T12:40:11.369 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T12:40:11.387 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T12:40:11.716 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140
2026-03-20T12:40:11.722 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T12:40:11.766 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T12:40:11.766 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-20T12:40:11.766 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-20T12:40:11.766 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:11.771 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T12:40:11.938 INFO:teuthology.orchestra.run.vm06.stdout: Preparing : 1/1
2026-03-20T12:40:11.946 INFO:teuthology.orchestra.run.vm06.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140
2026-03-20T12:40:11.949 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140
2026-03-20T12:40:11.961 INFO:teuthology.orchestra.run.vm06.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140
2026-03-20T12:40:12.141 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140
2026-03-20T12:40:12.143 INFO:teuthology.orchestra.run.vm06.stdout: Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140
2026-03-20T12:40:12.177 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140
2026-03-20T12:40:12.186 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-20T12:40:12.190 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140
2026-03-20T12:40:12.194 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140
2026-03-20T12:40:12.196 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140
2026-03-20T12:40:12.202 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140
2026-03-20T12:40:12.349 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140
2026-03-20T12:40:12.351 INFO:teuthology.orchestra.run.vm06.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T12:40:12.377 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T12:40:12.379 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140
2026-03-20T12:40:12.405 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140
2026-03-20T12:40:12.407 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T12:40:12.425 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T12:40:12.461 INFO:teuthology.orchestra.run.vm06.stdout: Installing : re2-1:20211101-20.el9.x86_64 15/140
2026-03-20T12:40:12.487 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140
2026-03-20T12:40:12.499 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/140
2026-03-20T12:40:12.506 INFO:teuthology.orchestra.run.vm06.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140
2026-03-20T12:40:12.510 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140
2026-03-20T12:40:12.516 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140
2026-03-20T12:40:12.545 INFO:teuthology.orchestra.run.vm06.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140
2026-03-20T12:40:12.563 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140
2026-03-20T12:40:12.567 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140
2026-03-20T12:40:12.576 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140
2026-03-20T12:40:12.579 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140
2026-03-20T12:40:12.621 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140
2026-03-20T12:40:12.630 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140
2026-03-20T12:40:12.633 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140
2026-03-20T12:40:12.634 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140
2026-03-20T12:40:12.690 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140
2026-03-20T12:40:12.692 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140
2026-03-20T12:40:12.715 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140
2026-03-20T12:40:12.731 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140
2026-03-20T12:40:12.739 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140
2026-03-20T12:40:12.771 INFO:teuthology.orchestra.run.vm06.stdout: Installing : zip-3.0-35.el9.x86_64 33/140
2026-03-20T12:40:12.777 INFO:teuthology.orchestra.run.vm06.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140
2026-03-20T12:40:12.786 INFO:teuthology.orchestra.run.vm06.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140
2026-03-20T12:40:12.850 INFO:teuthology.orchestra.run.vm06.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140
2026-03-20T12:40:12.869 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140
2026-03-20T12:40:12.889 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/140
2026-03-20T12:40:12.896 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140
2026-03-20T12:40:12.907 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140
2026-03-20T12:40:12.913 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140
2026-03-20T12:40:12.919 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140
2026-03-20T12:40:12.938 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140
2026-03-20T12:40:12.945 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140
2026-03-20T12:40:12.953 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140
2026-03-20T12:40:12.968 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140
2026-03-20T12:40:12.983 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140
2026-03-20T12:40:12.989 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140
2026-03-20T12:40:13.000 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140
2026-03-20T12:40:13.051 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140
2026-03-20T12:40:13.430 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140
2026-03-20T12:40:13.448 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140
2026-03-20T12:40:13.469 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140
2026-03-20T12:40:13.476 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140
2026-03-20T12:40:13.480 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140
2026-03-20T12:40:13.488 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140
2026-03-20T12:40:13.492 INFO:teuthology.orchestra.run.vm06.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140
2026-03-20T12:40:13.494 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140
2026-03-20T12:40:13.527 INFO:teuthology.orchestra.run.vm06.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140
2026-03-20T12:40:13.580 INFO:teuthology.orchestra.run.vm06.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140
2026-03-20T12:40:13.595 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140
2026-03-20T12:40:13.603 INFO:teuthology.orchestra.run.vm06.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140
2026-03-20T12:40:13.609 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140
2026-03-20T12:40:13.617 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140
2026-03-20T12:40:13.623 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140
2026-03-20T12:40:13.631 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140
2026-03-20T12:40:13.637 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140
2026-03-20T12:40:13.671 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 68/140
2026-03-20T12:40:13.685 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140
2026-03-20T12:40:13.693 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140
2026-03-20T12:40:13.703 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140
2026-03-20T12:40:13.745 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140
2026-03-20T12:40:14.025 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140
2026-03-20T12:40:14.056 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140
2026-03-20T12:40:14.087 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-20T12:40:14.091 INFO:teuthology.orchestra.run.vm06.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140
2026-03-20T12:40:14.159 INFO:teuthology.orchestra.run.vm06.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140
2026-03-20T12:40:14.162 INFO:teuthology.orchestra.run.vm06.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/140
2026-03-20T12:40:14.184 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140
2026-03-20T12:40:14.583 INFO:teuthology.orchestra.run.vm06.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140
2026-03-20T12:40:14.678 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140
2026-03-20T12:40:15.474 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140
2026-03-20T12:40:15.502 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140
2026-03-20T12:40:15.508 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140
2026-03-20T12:40:15.512 INFO:teuthology.orchestra.run.vm06.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140
2026-03-20T12:40:15.519 INFO:teuthology.orchestra.run.vm06.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140
2026-03-20T12:40:15.846 INFO:teuthology.orchestra.run.vm06.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140
2026-03-20T12:40:15.848 INFO:teuthology.orchestra.run.vm06.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T12:40:15.872 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T12:40:15.874 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140
2026-03-20T12:40:17.145 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:40:17.251 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:40:17.273 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T12:40:17.381 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 91/140
2026-03-20T12:40:17.391 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140
2026-03-20T12:40:17.409 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140
2026-03-20T12:40:17.431 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140
2026-03-20T12:40:17.524 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140
2026-03-20T12:40:17.540 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140
2026-03-20T12:40:17.570 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140
2026-03-20T12:40:17.608 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140
2026-03-20T12:40:17.670 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 99/140
2026-03-20T12:40:17.681 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140
2026-03-20T12:40:17.687 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140
2026-03-20T12:40:17.693 INFO:teuthology.orchestra.run.vm06.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140
2026-03-20T12:40:17.697 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140
2026-03-20T12:40:17.699 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T12:40:17.718 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-20T12:40:17.839 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:17.965 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T12:40:17.988 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T12:40:17.988 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:40:17.988 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T12:40:17.988 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T12:40:17.988 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T12:40:17.988 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:18.061 INFO:teuthology.orchestra.run.vm06.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140
2026-03-20T12:40:18.067 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T12:40:18.110 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T12:40:18.110 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-20T12:40:18.110 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-20T12:40:18.110 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-20T12:40:18.115 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T12:40:18.259 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T12:40:18.283 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T12:40:18.283 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:40:18.283 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T12:40:18.283 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T12:40:18.283 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T12:40:18.283 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:18.303 INFO:teuthology.orchestra.run.vm09.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140
2026-03-20T12:40:18.307 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140
2026-03-20T12:40:18.326 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T12:40:18.326 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'qat' with GID 994.
2026-03-20T12:40:18.326 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-20T12:40:18.326 INFO:teuthology.orchestra.run.vm09.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-20T12:40:18.326 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:18.366 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T12:40:18.395 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T12:40:18.395 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-20T12:40:18.395 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:18.416 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140
2026-03-20T12:40:18.450 INFO:teuthology.orchestra.run.vm09.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140
2026-03-20T12:40:18.540 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140
2026-03-20T12:40:18.544 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T12:40:18.558 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T12:40:18.558 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:40:18.558 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T12:40:18.558 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:19.378 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T12:40:19.405 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T12:40:19.405 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:40:19.405 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T12:40:19.405 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T12:40:19.405 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T12:40:19.405 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:19.477 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T12:40:19.480 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T12:40:19.490 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140
2026-03-20T12:40:19.518 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140
2026-03-20T12:40:19.521 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T12:40:20.828 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T12:40:20.838 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T12:40:21.392 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T12:40:21.395 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T12:40:21.459 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T12:40:21.511 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140
2026-03-20T12:40:21.514 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T12:40:21.534 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T12:40:21.534 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:40:21.534 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T12:40:21.534 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T12:40:21.534 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T12:40:21.534 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:21.549 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T12:40:21.561 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T12:40:21.609 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140
2026-03-20T12:40:22.761 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140
2026-03-20T12:40:22.765 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T12:40:22.784 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T12:40:22.784 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:40:22.784 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-20T12:40:22.784 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T12:40:22.784 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T12:40:22.784 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:22.797 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T12:40:22.816 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T12:40:22.816 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:40:22.816 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T12:40:22.816 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T12:40:22.959 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T12:40:22.979 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T12:40:22.979 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T12:40:22.979 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T12:40:22.979 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T12:40:22.979 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T12:40:22.979 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /sys 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /proc 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /mnt 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /var/tmp 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /home 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /root 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout:skipping the directory /tmp 2026-03-20T12:40:24.196 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:24.319 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140 2026-03-20T12:40:24.344 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140 2026-03-20T12:40:24.344 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T12:40:24.344 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-20T12:40:24.344 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-20T12:40:24.344 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 
2026-03-20T12:40:24.344 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:24.605 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140 2026-03-20T12:40:24.631 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140 2026-03-20T12:40:24.631 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T12:40:24.631 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-20T12:40:24.631 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-20T12:40:24.631 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-20T12:40:24.631 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:24.644 INFO:teuthology.orchestra.run.vm06.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140 2026-03-20T12:40:24.647 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140 2026-03-20T12:40:24.668 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-20T12:40:24.668 INFO:teuthology.orchestra.run.vm06.stdout:Creating group 'qat' with GID 994. 2026-03-20T12:40:24.668 INFO:teuthology.orchestra.run.vm06.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-20T12:40:24.668 INFO:teuthology.orchestra.run.vm06.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 
2026-03-20T12:40:24.668 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:24.678 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-20T12:40:24.706 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-20T12:40:24.707 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-20T12:40:24.707 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:24.727 INFO:teuthology.orchestra.run.vm06.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140 2026-03-20T12:40:24.758 INFO:teuthology.orchestra.run.vm06.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140 2026-03-20T12:40:24.835 INFO:teuthology.orchestra.run.vm06.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140 2026-03-20T12:40:24.840 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140 2026-03-20T12:40:24.856 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140 2026-03-20T12:40:24.856 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T12:40:24.856 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-20T12:40:24.856 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:25.722 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140 2026-03-20T12:40:25.750 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140 2026-03-20T12:40:25.750 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-20T12:40:25.750 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-20T12:40:25.750 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-20T12:40:25.750 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-20T12:40:25.750 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:25.831 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140 2026-03-20T12:40:25.834 INFO:teuthology.orchestra.run.vm06.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140 2026-03-20T12:40:25.843 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140 2026-03-20T12:40:25.872 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140 2026-03-20T12:40:25.875 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140 2026-03-20T12:40:27.269 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140 2026-03-20T12:40:27.280 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140 2026-03-20T12:40:27.537 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140 2026-03-20T12:40:27.545 INFO:teuthology.orchestra.run.vm09.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140 2026-03-20T12:40:27.552 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140 2026-03-20T12:40:27.565 INFO:teuthology.orchestra.run.vm09.stdout: 
Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140 2026-03-20T12:40:27.585 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140 2026-03-20T12:40:27.593 INFO:teuthology.orchestra.run.vm09.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140 2026-03-20T12:40:27.597 INFO:teuthology.orchestra.run.vm09.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140 2026-03-20T12:40:27.597 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140 2026-03-20T12:40:27.614 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140 2026-03-20T12:40:27.614 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T12:40:27.852 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140 2026-03-20T12:40:27.855 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140 2026-03-20T12:40:27.925 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140 2026-03-20T12:40:27.990 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140 2026-03-20T12:40:27.996 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140 2026-03-20T12:40:28.024 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140 2026-03-20T12:40:28.024 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T12:40:28.024 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 
2026-03-20T12:40:28.024 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-20T12:40:28.024 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-20T12:40:28.024 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:28.041 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140 2026-03-20T12:40:28.058 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140 2026-03-20T12:40:28.113 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140 2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140 
2026-03-20T12:40:29.140 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140 
2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140 2026-03-20T12:40:29.141 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140 2026-03-20T12:40:29.143 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 44/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 51/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140 2026-03-20T12:40:29.143 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140 2026-03-20T12:40:29.144 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140 
2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140 2026-03-20T12:40:29.144 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140 
2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 107/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140 2026-03-20T12:40:29.144 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
python3-kubernetes-1:26.1.0-3.el9.noarch 121/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140 2026-03-20T12:40:29.145 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout:Upgraded: 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout:Installed: 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: bzip2-1.0.8-11.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 
INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: 
flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: fuse-2.9.9-17.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-20T12:40:29.247 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: 
libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: perl-Benchmark-1.23-483.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: perl-Test-Harness-1:3.42-461.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-20T12:40:29.248 
INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-20T12:40:29.248 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-20T12:40:29.249 
INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing-2.4.7-9.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict-0.12.0-15.el9.noarch 
2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: s3cmd-2.4.0-1.el9.noarch 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:29.249 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-20T12:40:29.346 DEBUG:teuthology.parallel:result is None 2026-03-20T12:40:29.362 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140 2026-03-20T12:40:29.366 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140 2026-03-20T12:40:29.394 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140 2026-03-20T12:40:29.394 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T12:40:29.394 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-20T12:40:29.394 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-20T12:40:29.394 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-20T12:40:29.394 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:29.407 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140 2026-03-20T12:40:29.429 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140 2026-03-20T12:40:29.429 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T12:40:29.429 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-20T12:40:29.429 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:29.581 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140 2026-03-20T12:40:29.605 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140 2026-03-20T12:40:29.605 INFO:teuthology.orchestra.run.vm06.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T12:40:29.605 INFO:teuthology.orchestra.run.vm06.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-20T12:40:29.605 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-20T12:40:29.605 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-20T12:40:29.605 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:34.086 INFO:teuthology.orchestra.run.vm06.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140 2026-03-20T12:40:34.094 INFO:teuthology.orchestra.run.vm06.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140 2026-03-20T12:40:34.101 INFO:teuthology.orchestra.run.vm06.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140 2026-03-20T12:40:34.132 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140 2026-03-20T12:40:34.151 INFO:teuthology.orchestra.run.vm06.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140 2026-03-20T12:40:34.159 INFO:teuthology.orchestra.run.vm06.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140 2026-03-20T12:40:34.163 INFO:teuthology.orchestra.run.vm06.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140 2026-03-20T12:40:34.163 INFO:teuthology.orchestra.run.vm06.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140 2026-03-20T12:40:34.181 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140 2026-03-20T12:40:34.181 INFO:teuthology.orchestra.run.vm06.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T12:40:35.802 INFO:teuthology.orchestra.run.vm06.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T12:40:35.802 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140 2026-03-20T12:40:35.802 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140 2026-03-20T12:40:35.802 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140 2026-03-20T12:40:35.803 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140 2026-03-20T12:40:35.803 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140 2026-03-20T12:40:35.803 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140 2026-03-20T12:40:35.803 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140 2026-03-20T12:40:35.805 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/140 2026-03-20T12:40:35.805 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140 2026-03-20T12:40:35.805 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140 2026-03-20T12:40:35.805 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 44/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
python3-pyparsing-2.4.7-9.el9.noarch 51/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: 
Verifying : openblas-0.3.29-1.el9.x86_64 68/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : 
python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140 2026-03-20T12:40:35.806 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140 2026-03-20T12:40:35.806 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 107/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140 2026-03-20T12:40:35.807 
INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 121/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140 2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140 
2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140
2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140
2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140
2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140
2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140
2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140
2026-03-20T12:40:35.807 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout:Upgraded:
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout:Installed:
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: bzip2-1.0.8-11.el9.x86_64
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.911 INFO:teuthology.orchestra.run.vm06.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: fuse-2.9.9-17.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: lua-5.4.4-4.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-20T12:40:35.912 INFO:teuthology.orchestra.run.vm06.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: perl-Benchmark-1.23-483.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: perl-Test-Harness-1:3.42-461.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-ply-3.11-14.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyparsing-2.4.7-9.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-20T12:40:35.913 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: re2-1:20211101-20.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: s3cmd-2.4.0-1.el9.noarch
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: unzip-6.0-59.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout: zip-3.0-35.el9.x86_64
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-20T12:40:35.914 INFO:teuthology.orchestra.run.vm06.stdout:Complete!
2026-03-20T12:40:36.004 DEBUG:teuthology.parallel:result is None
2026-03-20T12:40:36.004 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T12:40:36.618 DEBUG:teuthology.orchestra.run.vm00:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-20T12:40:36.638 INFO:teuthology.orchestra.run.vm00.stdout:20.2.0-712.g70f8415b.el9
2026-03-20T12:40:36.639 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9
2026-03-20T12:40:36.639 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed.
2026-03-20T12:40:36.640 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T12:40:37.272 DEBUG:teuthology.orchestra.run.vm06:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-20T12:40:37.291 INFO:teuthology.orchestra.run.vm06.stdout:20.2.0-712.g70f8415b.el9
2026-03-20T12:40:37.292 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9
2026-03-20T12:40:37.292 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed.
2026-03-20T12:40:37.293 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T12:40:37.909 DEBUG:teuthology.orchestra.run.vm09:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-20T12:40:37.930 INFO:teuthology.orchestra.run.vm09.stdout:20.2.0-712.g70f8415b.el9
2026-03-20T12:40:37.930 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9
2026-03-20T12:40:37.930 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed.
2026-03-20T12:40:37.932 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-20T12:40:37.932 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:37.932 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-20T12:40:37.956 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:37.956 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-20T12:40:37.983 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-20T12:40:37.983 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-20T12:40:38.010 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-20T12:40:38.010 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:38.010 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper
2026-03-20T12:40:38.037 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-20T12:40:38.102 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:38.103 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/daemon-helper
2026-03-20T12:40:38.129 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-20T12:40:38.192 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-20T12:40:38.192 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper
2026-03-20T12:40:38.221 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-20T12:40:38.290 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-20T12:40:38.290 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:38.290 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-20T12:40:38.317 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-20T12:40:38.384 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:38.384 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-20T12:40:38.407 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-20T12:40:38.470 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-20T12:40:38.470 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-20T12:40:38.497 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-20T12:40:38.565 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-20T12:40:38.565 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:38.565 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer
2026-03-20T12:40:38.590 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-20T12:40:38.655 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:38.655 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/stdin-killer
2026-03-20T12:40:38.681 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-20T12:40:38.746 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-20T12:40:38.746 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer
2026-03-20T12:40:38.771 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-20T12:40:38.837 INFO:teuthology.run_tasks:Running task ceph...
2026-03-20T12:40:38.877 INFO:tasks.ceph:Making ceph log dir writeable by non-root...
2026-03-20T12:40:38.877 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /var/log/ceph
2026-03-20T12:40:38.879 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 777 /var/log/ceph
2026-03-20T12:40:38.881 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /var/log/ceph
2026-03-20T12:40:38.906 INFO:tasks.ceph:Disabling ceph logrotate...
2026-03-20T12:40:38.906 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/logrotate.d/ceph
2026-03-20T12:40:38.943 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/logrotate.d/ceph
2026-03-20T12:40:38.945 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/logrotate.d/ceph
2026-03-20T12:40:38.972 INFO:tasks.ceph:Creating extra log directories...
2026-03-20T12:40:38.974 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-03-20T12:40:39.008 DEBUG:teuthology.orchestra.run.vm06:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-03-20T12:40:39.011 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-03-20T12:40:39.040 INFO:tasks.ceph:Creating ceph cluster ceph...
2026-03-20T12:40:39.040 INFO:tasks.ceph:config {'conf': {'client': {'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'global': {'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}}, 'fs': 'xfs', 'mkfs_options': None, 'mount_options': None, 'skip_mgr_daemons': False, 'log_ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', '\\(POOL_APP_NOT_ENABLED\\)', 'not have an application enabled'], 'cpu_profile': set(), 'cluster': 'ceph', 'mon_bind_msgr2': True, 'mon_bind_addrvec': True}
2026-03-20T12:40:39.041 INFO:tasks.ceph:ctx.config {'archive_path': '/archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137', 'branch': 'tentacle', 'description': 'rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides supported-distros/{centos_latest} tasks/{0-install test_dedup}}', 'email': None, 'first_in_suite': False, 'flavor': 'default', 'job_id': '2137', 'last_in_suite': False, 'machine_type': 'vps', 'name': 'kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps', 'no_nested_subset': False, 'openstack': [{'volumes': {'count': 4, 'size': 10}}], 'os_type': 'centos', 'os_version': '9.stream', 'overrides': {'admin_socket': {'branch': 'tentacle'}, 'ansible.cephlab': {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}, 'ceph': {'conf': {'client': {'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'global': {'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'fs': 'xfs', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', '\\(POOL_APP_NOT_ENABLED\\)', 'not have an application enabled'], 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'ceph-deploy': {'bluestore': True, 'conf': {'client': {'log file': '/var/log/ceph/ceph-$name.$pid.log'}, 'mon': {}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd objectstore': 'bluestore'}}, 'fs': 'xfs'}, 'cephadm': {'cephadm_binary_url': 'https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm'}, 'install': {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}, 'rgw': {'frontend': 'beast', 'storage classes': {'FROZEN': None, 'LUKEWARM': None}}, 'thrashosds': {'bdev_inject_crash': 2, 'bdev_inject_crash_probability': 0.5}, 'workunit': {'branch': 'tt-tentacle', 'sha1': '200ab49823532903ca9be3870ca957b2093ed400'}}, 'owner': 'kyr', 'priority': 1000, 'repo': 'https://github.com/ceph/ceph.git', 'roles': [['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0'], ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1'], ['client.2']], 'seed': 9234, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'sleep_before_teardown': 0, 'suite': 'rgw', 'suite_branch': 'tt-tentacle', 'suite_path': '/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa', 'suite_relpath': 'qa', 'suite_repo': 'https://github.com/kshtsk/ceph.git', 'suite_sha1': '200ab49823532903ca9be3870ca957b2093ed400', 'targets': {'vm00.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKrAQ2wALjNqRVwSitDTrwMbI2ae3qJpXamxI9dyPIIP/bthwD/JC3Bq4VeIKtmHSfTqu2jXJ3cEg/Fg3dT8IXI=', 'vm06.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMZn+fAEzn0fqL1dQe1nMCXgSntAM8D9CmD/gV5Abdu/BmZ6UTkHjHK9viQHu8qrVAbYbrtuZFpJKKdr8DK5SRk=', 'vm09.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMP0df182rq6IBJgcAFGlHAqNQW9wF5V8aAKvt4o5ioy1lGzCZoZimMEgVtMQC5xHdRgdbVGHnVH2pZjtVRYgt8='}, 'tasks': [{'internal.check_packages': None}, {'internal.buildpackages_prep': None}, {'internal.save_config': None}, {'internal.check_lock': None}, {'internal.add_remotes': None}, {'console_log': None}, {'internal.connect': None}, {'internal.push_inventory': None}, {'internal.serialize_remote_roles': None}, {'internal.check_conflict': None}, {'internal.check_ceph_data': None}, {'internal.vm_setup': None}, {'internal.base': None}, {'internal.archive_upload': None}, {'internal.archive': None}, {'internal.coredump': None}, {'internal.sudo': None}, {'internal.syslog': None}, {'internal.timer': None}, {'pcp': None}, {'selinux': None}, {'ansible.cephlab': None}, {'clock': None}, {'install': None}, {'ceph': None}, {'openssl_keys': None}, {'rgw': ['client.0', 'client.1', 'client.2']}, {'tox': ['client.0']}, {'tox': ['client.0']}, {'dedup-tests': {'client.0': {'rgw_server': 'client.0'}}}], 'teuthology': {'fragments_dropped': [], 'meta': {}, 'postmerge': []}, 'teuthology_branch': 'clyso-debian-13', 'teuthology_repo': 'https://github.com/clyso/teuthology', 'teuthology_sha1': '1c580df7a9c7c2aadc272da296344fd99f27c444', 'timestamp': '2026-03-20_12:32:34', 'tube': 'vps', 'user': 'kyr', 'verbose': False, 'worker_log': '/home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345'}
2026-03-20T12:40:39.041 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-03-20T12:40:39.073 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-03-20T12:40:39.077 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-03-20T12:40:39.093 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m0777 -- /var/run/ceph
2026-03-20T12:40:39.131 DEBUG:teuthology.orchestra.run.vm06:> sudo install -d -m0777 -- /var/run/ceph
2026-03-20T12:40:39.135 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m0777 -- /var/run/ceph
2026-03-20T12:40:39.163 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:39.163 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout
2026-03-20T12:40:39.214 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4']
2026-03-20T12:40:39.214 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_1
2026-03-20T12:40:39.273 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0
2026-03-20T12:40:39.273 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-20T12:40:39.273 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 638 Links: 1
2026-03-20T12:40:39.273 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-20T12:40:39.273 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0
2026-03-20T12:40:39.274 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 12:39:44.204330841 +0000
2026-03-20T12:40:39.274 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 12:38:16.986372069 +0000
2026-03-20T12:40:39.274 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 12:38:16.986372069 +0000
2026-03-20T12:40:39.274 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 12:38:16.986372069 +0000
2026-03-20T12:40:39.274 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1
2026-03-20T12:40:39.338 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-20T12:40:39.339 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-20T12:40:39.339 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000140232 s, 3.7 MB/s
2026-03-20T12:40:39.339 DEBUG:teuthology.orchestra.run.vm00:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1
2026-03-20T12:40:39.397 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_2
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 700 Links: 1
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 12:39:44.204330841 +0000
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 12:38:17.238371550 +0000
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 12:38:17.238371550 +0000
2026-03-20T12:40:39.454 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 12:38:17.238371550 +0000
2026-03-20T12:40:39.454 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1
2026-03-20T12:40:39.516 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-20T12:40:39.516 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-20T12:40:39.516 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.00011252 s, 4.6 MB/s
2026-03-20T12:40:39.517 DEBUG:teuthology.orchestra.run.vm00:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2
2026-03-20T12:40:39.572 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_3
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 721 Links: 1
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 12:39:44.205330842 +0000
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 12:38:17.462371088 +0000
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 12:38:17.462371088 +0000
2026-03-20T12:40:39.632 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 12:38:17.462371088 +0000
2026-03-20T12:40:39.632 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1
2026-03-20T12:40:39.697 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-20T12:40:39.697 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-20T12:40:39.697 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000146614 s, 3.5 MB/s
2026-03-20T12:40:39.698 DEBUG:teuthology.orchestra.run.vm00:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3
2026-03-20T12:40:39.756 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_4
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 773 Links: 1
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 12:39:44.205330842 +0000
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 12:38:17.733370530 +0000
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 12:38:17.733370530 +0000
2026-03-20T12:40:39.813 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 12:38:17.733370530 +0000
2026-03-20T12:40:39.813 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1
2026-03-20T12:40:39.877 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-20T12:40:39.877 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-20T12:40:39.877 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000106299 s, 4.8 MB/s
2026-03-20T12:40:39.878 DEBUG:teuthology.orchestra.run.vm00:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4
2026-03-20T12:40:39.936 INFO:tasks.ceph:osd dev map: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:39.936 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:39.936 DEBUG:teuthology.orchestra.run.vm06:> dd if=/scratch_devs of=/dev/stdout
2026-03-20T12:40:39.954 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4']
2026-03-20T12:40:39.954 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vg_nvme/lv_1
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 651 Links: 1
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:device_t:s0
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-20 12:40:34.469039704 +0000
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-20 12:38:35.352102994 +0000
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-20 12:38:35.352102994 +0000
2026-03-20T12:40:40.011 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-20 12:38:35.352102994 +0000
2026-03-20T12:40:40.011 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1
2026-03-20T12:40:40.075 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in
2026-03-20T12:40:40.075 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out
2026-03-20T12:40:40.075 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000214361 s, 2.4 MB/s
2026-03-20T12:40:40.076 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1 2026-03-20T12:40:40.135 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vg_nvme/lv_2 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 677 Links: 1 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-20 12:40:34.469039704 +0000 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-20 12:38:35.594090822 +0000 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-20 12:38:35.594090822 +0000 2026-03-20T12:40:40.193 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-20 12:38:35.594090822 +0000 2026-03-20T12:40:40.194 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1 2026-03-20T12:40:40.258 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-20T12:40:40.259 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-20T12:40:40.259 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000125946 s, 4.1 MB/s 2026-03-20T12:40:40.260 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2 2026-03-20T12:40:40.319 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vg_nvme/lv_3 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 702 Links: 1 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-20 12:40:34.469039704 +0000 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-20 12:38:35.835078700 +0000 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-20 12:38:35.835078700 +0000 2026-03-20T12:40:40.378 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-20 12:38:35.835078700 +0000 2026-03-20T12:40:40.378 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1 2026-03-20T12:40:40.440 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-20T12:40:40.441 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-20T12:40:40.441 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000143509 s, 3.6 MB/s 2026-03-20T12:40:40.441 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-03-20T12:40:40.499 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vg_nvme/lv_4 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout:Device: 6h/6d Inode: 763 Links: 1 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-20 12:40:34.470039705 +0000 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-20 12:38:36.106065069 +0000 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-20 12:38:36.106065069 +0000 2026-03-20T12:40:40.557 INFO:teuthology.orchestra.run.vm06.stdout: Birth: 2026-03-20 12:38:36.106065069 +0000 2026-03-20T12:40:40.557 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-03-20T12:40:40.622 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-20T12:40:40.622 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-20T12:40:40.622 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000220232 s, 2.3 MB/s 2026-03-20T12:40:40.623 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4 2026-03-20T12:40:40.682 INFO:tasks.ceph:osd dev map: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'} 2026-03-20T12:40:40.682 INFO:tasks.ceph:remote_to_roles_to_devs: {Remote(name='ubuntu@vm00.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}, Remote(name='ubuntu@vm06.local'): {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}} 2026-03-20T12:40:40.682 INFO:tasks.ceph:Generating config... 2026-03-20T12:40:40.682 INFO:tasks.ceph:[client] debug rgw = 20 2026-03-20T12:40:40.683 INFO:tasks.ceph:[client] debug rgw dedup = 20 2026-03-20T12:40:40.683 INFO:tasks.ceph:[client] setgroup = ceph 2026-03-20T12:40:40.683 INFO:tasks.ceph:[client] setuser = ceph 2026-03-20T12:40:40.683 INFO:tasks.ceph:[global] osd_max_pg_log_entries = 10 2026-03-20T12:40:40.683 INFO:tasks.ceph:[global] osd_min_pg_log_entries = 10 2026-03-20T12:40:40.683 INFO:tasks.ceph:[mgr] debug mgr = 20 2026-03-20T12:40:40.683 INFO:tasks.ceph:[mgr] debug ms = 1 2026-03-20T12:40:40.683 INFO:tasks.ceph:[mon] debug mon = 20 2026-03-20T12:40:40.683 INFO:tasks.ceph:[mon] debug ms = 1 2026-03-20T12:40:40.683 INFO:tasks.ceph:[mon] debug paxos = 20 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] bdev async discard = True 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] bdev enable discard = True 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] bluestore allocator = bitmap 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] bluestore block size = 96636764160 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] bluestore fsck on mount = True 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] debug bluefs = 1/20 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] debug bluestore = 1/20 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] debug ms = 1 2026-03-20T12:40:40.683 
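The per-device sanity sequence above (a `stat` of the LV symlink, a one-block `sudo dd` read, and a negated `mount | grep -v devtmpfs | grep -q <dev>`) repeats for each of lv_1..lv_4 on both OSD hosts before the dev map is accepted. A minimal standalone sketch of that check — the function name and the `/proc/mounts` parsing are illustrative, not teuthology's actual implementation, which shells out to the commands shown in the log:

```python
import os

def scratch_dev_usable(path):
    """Approximate teuthology's scratch-device sanity check: the path
    must exist, its first block must be readable, and it must not be
    the source of any non-devtmpfs mount."""
    if not os.path.exists(path):          # mirrors: stat <path>
        return False
    try:
        with open(path, "rb") as f:
            f.read(512)                   # mirrors: dd if=<path> of=/dev/null count=1
    except OSError:
        return False
    # mirrors: ! mount | grep -v devtmpfs | grep -q <path>
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            if len(fields) >= 3 and fields[2] == "devtmpfs":
                continue
            if fields and fields[0] == path:
                return False
    return True
```

Note the real check runs `dd` with sudo because raw block devices are root-readable; the sketch only demonstrates the three-step shape of the test.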
INFO:tasks.ceph:[osd] debug osd = 20 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] debug rocksdb = 4/10 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] mon osd backfillfull_ratio = 0.85 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] mon osd full ratio = 0.9 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] mon osd nearfull ratio = 0.8 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] osd failsafe full ratio = 0.95 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] osd mclock iops capacity threshold hdd = 49000 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] osd objectstore = bluestore 2026-03-20T12:40:40.683 INFO:tasks.ceph:[osd] osd shutdown pgref assert = True 2026-03-20T12:40:40.683 INFO:tasks.ceph:Setting up mon.a... 2026-03-20T12:40:40.683 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring /etc/ceph/ceph.keyring 2026-03-20T12:40:40.720 INFO:teuthology.orchestra.run.vm00.stdout:creating /etc/ceph/ceph.keyring 2026-03-20T12:40:40.723 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=mon. 
/etc/ceph/ceph.keyring 2026-03-20T12:40:40.803 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T12:40:40.870 DEBUG:tasks.ceph:Ceph mon addresses: [('mon.a', '192.168.123.100'), ('mon.c', '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]'), ('mon.b', '192.168.123.106')] 2026-03-20T12:40:40.870 DEBUG:tasks.ceph:writing out conf {'global': {'chdir': '', 'pid file': '/var/run/ceph/$cluster-$name.pid', 'auth supported': 'cephx', 'filestore xattr use omap': 'true', 'mon clock drift allowed': '1.000', 'osd crush chooseleaf type': '0', 'auth debug': 'true', 'ms die on old message': 'true', 'ms die on bug': 'true', 'mon max pg per osd': '10000', 'mon pg warn max object skew': '0', 'osd_pool_default_pg_autoscale_mode': 'off', 'osd pool default size': '2', 'mon osd allow primary affinity': 'true', 'mon osd allow pg remap': 'true', 'mon warn on legacy crush tunables': 'false', 'mon warn on crush straw calc version zero': 'false', 'mon warn on no sortbitwise': 'false', 'mon warn on osd down out interval zero': 'false', 'mon warn on too few osds': 'false', 'mon_warn_on_pool_pg_num_not_power_of_two': 'false', 'mon_warn_on_pool_no_redundancy': 'false', 'mon_allow_pool_size_one': 'true', 'osd pool default erasure code profile': 'plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd', 'osd default data pool replay window': '5', 'mon allow pool delete': 'true', 'mon cluster log file level': 'debug', 'debug asserts on shutdown': 'true', 'mon health detail to clog': 'false', 'mon host': '192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.106', 'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'osd': {'osd journal size': '100', 'osd scrub load threshold': '5.0', 'osd scrub max interval': '600', 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': 'true', 'osd recover clone overlap': 'true', 'osd recovery max chunk': '1048576', 'osd debug shutdown': 'true', 'osd debug op order': 
'true', 'osd debug verify stray on activate': 'true', 'osd debug trim objects': 'true', 'osd open classes on start': 'true', 'osd debug pg log writeout': 'true', 'osd deep scrub update digest min age': '30', 'osd map max advance': '10', 'journal zero on create': 'true', 'filestore ondisk finisher threads': '3', 'filestore apply finisher threads': '3', 'bdev debug aio': 'true', 'osd debug misdirected ops': 'true', 'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}, 'mgr': {'debug ms': 1, 'debug mgr': 20, 'debug mon': '20', 'debug auth': '20', 'mon reweight min pgs per osd': '4', 'mon reweight min bytes per osd': '10', 'mgr/telemetry/nag': 'false'}, 'mon': {'debug ms': 1, 'debug mon': 20, 'debug paxos': 20, 'debug auth': '20', 'mon data avail warn': '5', 'mon mgr mkfs grace': '240', 'mon reweight min pgs per osd': '4', 'mon osd reporter subtree level': 'osd', 'mon osd prime pg temp': 'true', 'mon reweight min bytes per osd': '10', 'auth mon ticket ttl': '660', 'auth service ticket ttl': '240', 'mon_warn_on_insecure_global_id_reclaim': 'false', 'mon_warn_on_insecure_global_id_reclaim_allowed': 'false', 'mon_down_mkfs_grace': '2m', 'mon_warn_on_filestore_osds': 'false'}, 'client': {'rgw cache enabled': 'true', 'rgw enable ops log': 'true', 'rgw enable usage log': 'true', 'log file': '/var/log/ceph/$cluster-$name.$pid.log', 'admin socket': '/var/run/ceph/$cluster-$name.$pid.asok', 'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'mon.a': {}, 'mon.c': {}, 'mon.b': {}} 
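The `writing out conf {...}` dict above is what each node's `sudo tee /etc/ceph/ceph.conf` later echoes back as INI-style sections. A hypothetical sketch of that dict-to-INI rendering — not teuthology's actual writer, which also preserves the comments and quoting visible in the echoed output:

```python
def render_ceph_conf(conf):
    """Render a {section: {key: value}} dict as ceph.conf-style INI.
    Python booleans stringify capitalized, which is why the log shows
    lines like 'bluestore fsck on mount = True'."""
    out = []
    for section, options in conf.items():
        out.append("[%s]" % section)
        for key, value in options.items():
            out.append("\t%s = %s" % (key, value))
        out.append("")  # blank line between sections
    return "\n".join(out)
```

Empty per-daemon sections such as `'mon.a': {}` still render as bare `[mon.a]` headers, matching the tail of the conf dump below.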
2026-03-20T12:40:40.871 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:40.871 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph.tmp.conf 2026-03-20T12:40:40.928 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage monmaptool -c /home/ubuntu/cephtest/ceph.tmp.conf --create --clobber --enable-all-features --add a 192.168.123.100 --addv c '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]' --add b 192.168.123.106 --print /home/ubuntu/cephtest/ceph.monmap 2026-03-20T12:40:41.005 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:40:41.005 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: monmap file /home/ubuntu/cephtest/ceph.monmap 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: generated fsid 8a1e3aca-ae1e-437d-a30d-aacd48456e6d 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = tentacle 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:epoch 0 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:fsid 8a1e3aca-ae1e-437d-a30d-aacd48456e6d 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:last_changed 2026-03-20T12:40:41.006629+0000 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:created 2026-03-20T12:40:41.006629+0000 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:min_mon_release 20 (tentacle) 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:election_strategy: 1 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:1: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.b 2026-03-20T12:40:41.006 
INFO:teuthology.orchestra.run.vm00.stdout:2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-20T12:40:41.006 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: writing epoch 0 to /home/ubuntu/cephtest/ceph.monmap (3 monitors) 2026-03-20T12:40:41.007 DEBUG:teuthology.orchestra.run.vm00:> rm -- /home/ubuntu/cephtest/ceph.tmp.conf 2026-03-20T12:40:41.063 INFO:tasks.ceph:Writing /etc/ceph/ceph.conf for FSID 8a1e3aca-ae1e-437d-a30d-aacd48456e6d... 2026-03-20T12:40:41.063 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-20T12:40:41.105 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-20T12:40:41.107 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-20T12:40:41.148 INFO:teuthology.orchestra.run.vm06.stdout:[global] 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: chdir = "" 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: pid file = /var/run/ceph/$cluster-$name.pid 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: auth supported = cephx 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: filestore xattr use omap = true 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon clock drift allowed = 1.000 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: osd crush chooseleaf type = 0 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: auth debug = true 
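In the monmaptool output above, mon.c is passed with an explicit v2/v1 addrvec (`--addv c '[v2:...:3301,v1:...:6790]'`) while mon.a and mon.b are bare IPs that monmaptool expands to the default ports (v2:3300, v1:6789). A hedged parser for these two address forms, with the defaults taken from the monmap print above — an illustrative helper (IPv4 only), not part of teuthology or Ceph:

```python
def parse_mon_addr(addr):
    """Parse a monitor address into a list of (proto, ip, port) tuples.
    Two forms appear in the log:
      '[v2:IP:PORT,v1:IP:PORT]' -- explicit addrvec
      'IP'                      -- bare IP; defaults to v2:3300, v1:6789
    """
    if addr.startswith("[") and addr.endswith("]"):
        entries = []
        for part in addr[1:-1].split(","):
            proto, ip, port = part.split(":")  # IPv4 only; IPv6 not handled
            entries.append((proto, ip, int(port)))
        return entries
    return [("v2", addr, 3300), ("v1", addr, 6789)]
```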
2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: ms die on old message = true 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: ms die on bug = true 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon max pg per osd = 10000 # >= luminous 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon pg warn max object skew = 0 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: # disable pg_autoscaler by default for new pools 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: osd_pool_default_pg_autoscale_mode = off 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: osd pool default size = 2 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon osd allow primary affinity = true 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon osd allow pg remap = true 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on legacy crush tunables = false 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on crush straw calc version zero = false 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on no sortbitwise = false 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on osd down out interval zero = false 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on too few osds = false 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 
mon_warn_on_pool_no_redundancy = false 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon_allow_pool_size_one = true 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: osd default data pool replay window = 5 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.149 INFO:teuthology.orchestra.run.vm06.stdout: mon allow pool delete = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: mon cluster log file level = debug 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: debug asserts on shutdown = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: mon health detail to clog = false 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: mon host = "192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.106" 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd_max_pg_log_entries = 10 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd_min_pg_log_entries = 10 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: fsid = 8a1e3aca-ae1e-437d-a30d-aacd48456e6d 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout:[osd] 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd journal size = 100 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd scrub load threshold = 5.0 2026-03-20T12:40:41.150 
INFO:teuthology.orchestra.run.vm06.stdout: osd scrub max interval = 600 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd mclock profile = high_recovery_ops 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd mclock skip benchmark = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd recover clone overlap = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd recovery max chunk = 1048576 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd debug shutdown = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd debug op order = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd debug verify stray on activate = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd debug trim objects = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd open classes on start = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd debug pg log writeout = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd deep scrub update digest min age = 30 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd map max advance = 10 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: journal zero on create = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: filestore ondisk finisher threads = 3 2026-03-20T12:40:41.150 
INFO:teuthology.orchestra.run.vm06.stdout: filestore apply finisher threads = 3 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: bdev debug aio = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: osd debug misdirected ops = true 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: bdev async discard = True 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: bdev enable discard = True 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: bluestore allocator = bitmap 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: bluestore block size = 96636764160 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: bluestore fsck on mount = True 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: debug bluefs = 1/20 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: debug bluestore = 1/20 2026-03-20T12:40:41.150 INFO:teuthology.orchestra.run.vm06.stdout: debug ms = 1 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug osd = 20 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug rocksdb = 4/10 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon osd backfillfull_ratio = 0.85 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon osd full ratio = 0.9 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon osd nearfull ratio = 0.8 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: osd failsafe full ratio = 0.95 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: osd mclock iops capacity threshold hdd = 49000 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: osd objectstore = bluestore 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: osd shutdown pgref assert = True 2026-03-20T12:40:41.151 
INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout:[mgr] 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug ms = 1 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug mgr = 20 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug mon = 20 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug auth = 20 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon reweight min pgs per osd = 4 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon reweight min bytes per osd = 10 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mgr/telemetry/nag = false 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout:[mon] 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug ms = 1 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug mon = 20 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug paxos = 20 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug auth = 20 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon data avail warn = 5 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon mgr mkfs grace = 240 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon reweight min pgs per osd = 4 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon osd reporter subtree level = osd 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon osd prime pg temp = true 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon reweight min bytes per osd = 10 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: # rotate auth tickets quickly to exercise renewal paths 
2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: auth mon ticket ttl = 660 # 11m 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: auth service ticket ttl = 240 # 4m 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: # don't complain about insecure global_id in the test suite 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_insecure_global_id_reclaim = false 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: # 1m isn't quite enough 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon_down_mkfs_grace = 2m 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_filestore_osds = false 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout:[client] 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: rgw cache enabled = true 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: rgw enable ops log = true 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: rgw enable usage log = true 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok 2026-03-20T12:40:41.151 INFO:teuthology.orchestra.run.vm06.stdout: debug rgw = 20 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm06.stdout: debug rgw dedup = 20 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm06.stdout: setgroup = ceph 2026-03-20T12:40:41.152 
INFO:teuthology.orchestra.run.vm06.stdout: setuser = ceph 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm06.stdout:[mon.a] 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm06.stdout:[mon.c] 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm06.stdout:[mon.b] 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout:[global] 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: chdir = "" 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: pid file = /var/run/ceph/$cluster-$name.pid 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: auth supported = cephx 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: filestore xattr use omap = true 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: mon clock drift allowed = 1.000 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: osd crush chooseleaf type = 0 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: auth debug = true 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: ms die on old message = true 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: ms die on bug = true 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: mon max pg per osd = 10000 # >= luminous 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: mon pg warn max object skew = 0 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: # disable pg_autoscaler by default for new pools 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 
osd_pool_default_pg_autoscale_mode = off 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: osd pool default size = 2 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: mon osd allow primary affinity = true 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: mon osd allow pg remap = true 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on legacy crush tunables = false 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on crush straw calc version zero = false 2026-03-20T12:40:41.152 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on no sortbitwise = false 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on osd down out interval zero = false 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on too few osds = false 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_pool_no_redundancy = false 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon_allow_pool_size_one = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd default data pool replay window = 5 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon allow pool delete = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 
2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon cluster log file level = debug 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: debug asserts on shutdown = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon health detail to clog = false 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon host = "192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.106" 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd_max_pg_log_entries = 10 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd_min_pg_log_entries = 10 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: fsid = 8a1e3aca-ae1e-437d-a30d-aacd48456e6d 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout:[osd] 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd journal size = 100 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd scrub load threshold = 5.0 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd scrub max interval = 600 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock profile = high_recovery_ops 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock skip benchmark = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd recover clone overlap = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd recovery max chunk = 1048576 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd debug shutdown = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd debug op order = true 
2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd debug verify stray on activate = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd debug trim objects = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd open classes on start = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd debug pg log writeout = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd deep scrub update digest min age = 30 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd map max advance = 10 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: journal zero on create = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: filestore ondisk finisher threads = 3 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: filestore apply finisher threads = 3 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: bdev debug aio = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: osd debug misdirected ops = true 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: bdev async discard = True 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: bdev enable discard = True 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: bluestore allocator = bitmap 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: bluestore block size = 96636764160 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: bluestore fsck on mount = True 
2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: debug bluefs = 1/20 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: debug bluestore = 1/20 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: debug osd = 20 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: debug rocksdb = 4/10 2026-03-20T12:40:41.153 INFO:teuthology.orchestra.run.vm00.stdout: mon osd backfillfull_ratio = 0.85 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon osd full ratio = 0.9 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon osd nearfull ratio = 0.8 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: osd failsafe full ratio = 0.95 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock iops capacity threshold hdd = 49000 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: osd objectstore = bluestore 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: osd shutdown pgref assert = True 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout:[mgr] 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug mgr = 20 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug mon = 20 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug auth = 20 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min pgs per osd = 4 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min bytes per osd = 10 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mgr/telemetry/nag = false 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.154 
INFO:teuthology.orchestra.run.vm00.stdout:[mon] 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug mon = 20 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug paxos = 20 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug auth = 20 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon data avail warn = 5 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon mgr mkfs grace = 240 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min pgs per osd = 4 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon osd reporter subtree level = osd 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon osd prime pg temp = true 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min bytes per osd = 10 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: # rotate auth tickets quickly to exercise renewal paths 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: auth mon ticket ttl = 660 # 11m 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: auth service ticket ttl = 240 # 4m 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: # don't complain about insecure global_id in the test suite 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_insecure_global_id_reclaim = false 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: # 1m isn't quite enough 2026-03-20T12:40:41.154 
INFO:teuthology.orchestra.run.vm00.stdout: mon_down_mkfs_grace = 2m 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_filestore_osds = false 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout:[client] 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: rgw cache enabled = true 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable ops log = true 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable usage log = true 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug rgw = 20 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: debug rgw dedup = 20 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: setgroup = ceph 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout: setuser = ceph 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout:[mon.a] 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout:[mon.c] 2026-03-20T12:40:41.154 INFO:teuthology.orchestra.run.vm00.stdout:[mon.b] 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout:[global] 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: chdir = "" 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: pid file = /var/run/ceph/$cluster-$name.pid 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: auth supported = cephx 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: filestore xattr use omap = true 2026-03-20T12:40:41.155 
INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon clock drift allowed = 1.000 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: osd crush chooseleaf type = 0 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: auth debug = true 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: ms die on old message = true 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: ms die on bug = true 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon max pg per osd = 10000 # >= luminous 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon pg warn max object skew = 0 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: # disable pg_autoscaler by default for new pools 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: osd_pool_default_pg_autoscale_mode = off 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: osd pool default size = 2 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon osd allow primary affinity = true 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon osd allow pg remap = true 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon warn on legacy crush tunables = false 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon warn on crush straw calc version zero = false 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon warn on no sortbitwise = false 2026-03-20T12:40:41.155 
INFO:teuthology.orchestra.run.vm09.stdout: mon warn on osd down out interval zero = false 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon warn on too few osds = false 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon_warn_on_pool_no_redundancy = false 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon_allow_pool_size_one = true 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: osd default data pool replay window = 5 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon allow pool delete = true 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon cluster log file level = debug 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: debug asserts on shutdown = true 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon health detail to clog = false 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: mon host = "192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.106" 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: osd_max_pg_log_entries = 10 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: osd_min_pg_log_entries = 10 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: fsid = 8a1e3aca-ae1e-437d-a30d-aacd48456e6d 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 
2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout:[osd] 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: osd journal size = 100 2026-03-20T12:40:41.155 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd scrub load threshold = 5.0 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd scrub max interval = 600 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd mclock profile = high_recovery_ops 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd mclock skip benchmark = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd recover clone overlap = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd recovery max chunk = 1048576 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd debug shutdown = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd debug op order = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd debug verify stray on activate = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd debug trim objects = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd open classes on start = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd debug pg log writeout = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd deep scrub update digest min age = 30 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd map max advance = 10 2026-03-20T12:40:41.156 
INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: journal zero on create = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: filestore ondisk finisher threads = 3 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: filestore apply finisher threads = 3 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: bdev debug aio = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd debug misdirected ops = true 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: bdev async discard = True 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: bdev enable discard = True 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: bluestore allocator = bitmap 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: bluestore block size = 96636764160 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: bluestore fsck on mount = True 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug bluefs = 1/20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug bluestore = 1/20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug ms = 1 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug osd = 20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug rocksdb = 4/10 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mon osd backfillfull_ratio = 0.85 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mon osd full ratio = 0.9 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mon osd nearfull ratio = 0.8 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd failsafe full ratio = 0.95 2026-03-20T12:40:41.156 
INFO:teuthology.orchestra.run.vm09.stdout: osd mclock iops capacity threshold hdd = 49000 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd objectstore = bluestore 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: osd shutdown pgref assert = True 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout:[mgr] 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug ms = 1 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug mgr = 20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug mon = 20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug auth = 20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mon reweight min pgs per osd = 4 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mon reweight min bytes per osd = 10 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mgr/telemetry/nag = false 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout:[mon] 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug ms = 1 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug mon = 20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug paxos = 20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: debug auth = 20 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mon data avail warn = 5 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mon mgr mkfs grace = 240 2026-03-20T12:40:41.156 INFO:teuthology.orchestra.run.vm09.stdout: mon reweight min pgs per osd = 4 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: mon osd reporter subtree level = osd 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: mon osd prime pg temp = true 
2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: mon reweight min bytes per osd = 10 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: # rotate auth tickets quickly to exercise renewal paths 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: auth mon ticket ttl = 660 # 11m 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: auth service ticket ttl = 240 # 4m 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: # don't complain about insecure global_id in the test suite 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: mon_warn_on_insecure_global_id_reclaim = false 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: # 1m isn't quite enough 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: mon_down_mkfs_grace = 2m 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: mon_warn_on_filestore_osds = false 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout:[client] 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: rgw cache enabled = true 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: rgw enable ops log = true 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: rgw enable usage log = true 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: admin socket = 
/var/run/ceph/$cluster-$name.$pid.asok 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: debug rgw = 20 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: debug rgw dedup = 20 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: setgroup = ceph 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout: setuser = ceph 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout:[mon.a] 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout:[mon.c] 2026-03-20T12:40:41.157 INFO:teuthology.orchestra.run.vm09.stdout:[mon.b] 2026-03-20T12:40:41.163 INFO:tasks.ceph:Creating admin key on mon.a... 2026-03-20T12:40:41.163 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /etc/ceph/ceph.keyring 2026-03-20T12:40:41.245 INFO:tasks.ceph:Copying monmap to all nodes... 
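The `ceph-authtool --gen-key` call above writes an INI-style keyring with one section per entity (`client.admin` here) and one `caps <service>` line per capability granted. A minimal sketch of that format parsed with Python's stdlib — the key is a placeholder, not a real secret, and the tab indentation and quoting real keyrings use are dropped for simplicity:

```python
import configparser

# Placeholder keyring mimicking what ceph-authtool writes for client.admin;
# real files indent the entries with tabs and quote the capability strings.
KEYRING = """[client.admin]
key = AQD-placeholder-not-a-real-secret==
caps mon = allow *
caps osd = allow *
caps mds = allow *
caps mgr = allow *
"""

cp = configparser.ConfigParser()
cp.read_string(KEYRING)
admin = cp["client.admin"]
print(admin["caps mon"])      # allow *
assert len(admin["key"]) > 0  # a generated key is always present
```

The same entity/caps structure is what the later `ceph-authtool --create-keyring --gen-key --name=client.N` calls produce for each test client.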
2026-03-20T12:40:41.246 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:41.246 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.keyring of=/dev/stdout 2026-03-20T12:40:41.260 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:41.260 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.monmap of=/dev/stdout 2026-03-20T12:40:41.316 INFO:tasks.ceph:Sending monmap to node ubuntu@vm00.local 2026-03-20T12:40:41.316 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:41.316 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-20T12:40:41.316 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T12:40:41.389 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:41.389 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-20T12:40:41.444 INFO:tasks.ceph:Sending monmap to node ubuntu@vm06.local 2026-03-20T12:40:41.444 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-20T12:40:41.444 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-20T12:40:41.444 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T12:40:41.479 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-20T12:40:41.479 DEBUG:teuthology.orchestra.run.vm06:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-20T12:40:41.535 INFO:tasks.ceph:Sending monmap to node ubuntu@vm09.local 2026-03-20T12:40:41.536 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-20T12:40:41.536 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-20T12:40:41.536 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T12:40:41.571 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-20T12:40:41.571 DEBUG:teuthology.orchestra.run.vm09:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-20T12:40:41.628 INFO:tasks.ceph:Setting up mon nodes... 
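The keyring/monmap distribution above follows one pattern per target: the source node streams the file with `dd if=FILE of=/dev/stdout`, the bytes travel over the SSH channel, and the destination runs `sudo dd of=FILE` so the write happens with root privileges, followed by `chmod 0644`. A single-host sketch of the same pipeline (temp files stand in for the keyring/monmap, and `sudo` is omitted):

```python
import os
import subprocess
import tempfile

# Source side: dd streams the file to stdout.
# Destination side: dd consumes stdin and writes the target file
# (run under sudo in the real harness), then the mode is relaxed.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"fake-monmap-bytes")
src.close()
dst = tempfile.NamedTemporaryFile(delete=False)
dst.close()

reader = subprocess.Popen(["dd", f"if={src.name}", "of=/dev/stdout"],
                          stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
subprocess.run(["dd", f"of={dst.name}"], stdin=reader.stdout,
               stderr=subprocess.DEVNULL, check=True)
reader.wait()
os.chmod(dst.name, 0o644)

assert open(dst.name, "rb").read() == b"fake-monmap-bytes"
print("copied intact")
```

Using `dd` on both ends avoids shell quoting issues with arbitrary bytes and needs no scp/sftp support on the target.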
2026-03-20T12:40:41.628 INFO:tasks.ceph:Setting up mgr nodes... 2026-03-20T12:40:41.628 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mgr/ceph-y && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.y /var/lib/ceph/mgr/ceph-y/keyring 2026-03-20T12:40:41.675 INFO:teuthology.orchestra.run.vm00.stdout:creating /var/lib/ceph/mgr/ceph-y/keyring 2026-03-20T12:40:41.677 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /var/lib/ceph/mgr/ceph-x && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.x /var/lib/ceph/mgr/ceph-x/keyring 2026-03-20T12:40:41.733 INFO:teuthology.orchestra.run.vm06.stdout:creating /var/lib/ceph/mgr/ceph-x/keyring 2026-03-20T12:40:41.736 INFO:tasks.ceph:Setting up mds nodes... 2026-03-20T12:40:41.736 INFO:tasks.ceph_client:Setting up client nodes... 2026-03-20T12:40:41.736 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.0 /etc/ceph/ceph.client.0.keyring && sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-20T12:40:41.771 INFO:teuthology.orchestra.run.vm00.stdout:creating /etc/ceph/ceph.client.0.keyring 2026-03-20T12:40:41.781 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.1 /etc/ceph/ceph.client.1.keyring && sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-20T12:40:41.822 INFO:teuthology.orchestra.run.vm06.stdout:creating /etc/ceph/ceph.client.1.keyring 2026-03-20T12:40:41.838 DEBUG:teuthology.orchestra.run.vm09:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.2 /etc/ceph/ceph.client.2.keyring && sudo chmod 0644 /etc/ceph/ceph.client.2.keyring 
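Each OSD device below is formatted with `mkfs.xfs -f -i size=2048`; the 2048-byte inodes (versus the 512-byte XFS default) presumably leave room to store Ceph's extended attributes inline. The geometry mkfs reports can be cross-checked with simple arithmetic, using the numbers from the lv_1 output in the log:

```python
# Geometry reported by mkfs.xfs for /dev/vg_nvme/lv_1 in the log:
#   isize=2048  agcount=4  agsize=1310464  bsize=4096  blocks=5241856
bsize, blocks = 4096, 5241856
agcount, agsize = 4, 1310464

size_bytes = blocks * bsize
print(size_bytes)                 # 21470642176 bytes, just under 20 GiB
print(size_bytes / 2**30)         # 19.99609375

# The four allocation groups tile the device exactly.
assert agcount * agsize == blocks
```

This matches the job's openstack layout (4 x 10 GB volumes pooled into vg_nvme, each LV sized 25%VG), allowing for LVM metadata overhead.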
2026-03-20T12:40:41.880 INFO:teuthology.orchestra.run.vm09.stdout:creating /etc/ceph/ceph.client.2.keyring
2026-03-20T12:40:41.894 INFO:tasks.ceph:Running mkfs on osd nodes...
2026-03-20T12:40:41.894 INFO:tasks.ceph:ctx.disk_config.remote_to_roles_to_dev: {Remote(name='ubuntu@vm00.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}, Remote(name='ubuntu@vm06.local'): {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}}
2026-03-20T12:40:41.894 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-0
2026-03-20T12:40:41.919 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:41.920 INFO:tasks.ceph:role: osd.0
2026-03-20T12:40:41.920 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm00.local
2026-03-20T12:40:41.920 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-20T12:40:41.985 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-20T12:40:41.990 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done.
2026-03-20T12:40:41.991 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm00.local -o noatime
2026-03-20T12:40:41.992 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-0
2026-03-20T12:40:42.063 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-0
2026-03-20T12:40:42.133 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-1
2026-03-20T12:40:42.202 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:42.202 INFO:tasks.ceph:role: osd.1
2026-03-20T12:40:42.202 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm00.local
2026-03-20T12:40:42.202 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-20T12:40:42.269 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-20T12:40:42.273 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done.
2026-03-20T12:40:42.275 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm00.local -o noatime
2026-03-20T12:40:42.275 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-1
2026-03-20T12:40:42.349 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-1
2026-03-20T12:40:42.421 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-2
2026-03-20T12:40:42.487 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:42.487 INFO:tasks.ceph:role: osd.2
2026-03-20T12:40:42.487 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_3 on ubuntu@vm00.local
2026-03-20T12:40:42.487 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_3
2026-03-20T12:40:42.552 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_3 isize=2048 agcount=4, agsize=1310464 blks
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-20T12:40:42.553 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-20T12:40:42.557 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done.
2026-03-20T12:40:42.559 INFO:tasks.ceph:mount /dev/vg_nvme/lv_3 on ubuntu@vm00.local -o noatime
2026-03-20T12:40:42.559 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_3 /var/lib/ceph/osd/ceph-2
2026-03-20T12:40:42.632 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-2
2026-03-20T12:40:42.700 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-3
2026-03-20T12:40:42.764 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:42.764 INFO:tasks.ceph:role: osd.3
2026-03-20T12:40:42.764 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_4 on ubuntu@vm00.local
2026-03-20T12:40:42.764 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_4
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_4 isize=2048 agcount=4, agsize=1310464 blks
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-20T12:40:42.829 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-20T12:40:42.834 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done.
2026-03-20T12:40:42.837 INFO:tasks.ceph:mount /dev/vg_nvme/lv_4 on ubuntu@vm00.local -o noatime
2026-03-20T12:40:42.837 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-3
2026-03-20T12:40:42.908 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-3
2026-03-20T12:40:42.977 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-20T12:40:43.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:43.062+0000 7f3d23612900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2026-03-20T12:40:43.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:43.063+0000 7f3d23612900 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
2026-03-20T12:40:43.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:43.063+0000 7f3d23612900 -1 bdev(0x55cbc9841800 /var/lib/ceph/osd/ceph-0/block) open stat got: (1) Operation not permitted
2026-03-20T12:40:43.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:43.063+0000 7f3d23612900 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid
2026-03-20T12:40:43.767 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-20T12:40:43.795 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-20T12:40:43.878 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:43.878+0000 7f944ba67900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory
2026-03-20T12:40:43.879 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:43.878+0000 7f944ba67900 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring
2026-03-20T12:40:43.879 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:43.878+0000 7f944ba67900 -1 bdev(0x55c20890b800 /var/lib/ceph/osd/ceph-1/block) open stat got: (1) Operation not permitted
2026-03-20T12:40:43.879 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:43.879+0000 7f944ba67900 -1 bluestore(/var/lib/ceph/osd/ceph-1) _read_fsid unparsable uuid
2026-03-20T12:40:44.624 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-20T12:40:44.691 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 2 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-20T12:40:44.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:44.771+0000 7f15eb811900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory
2026-03-20T12:40:44.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:44.771+0000 7f15eb811900 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring
2026-03-20T12:40:44.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:44.771+0000 7f15eb811900 -1 bdev(0x55742d351800 /var/lib/ceph/osd/ceph-2/block) open stat got: (1) Operation not permitted
2026-03-20T12:40:44.771 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:44.771+0000 7f15eb811900 -1 bluestore(/var/lib/ceph/osd/ceph-2) _read_fsid unparsable uuid
2026-03-20T12:40:45.530 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-20T12:40:45.598 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-20T12:40:45.678 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:45.678+0000 7f3271dda900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-3/keyring: can't open /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
2026-03-20T12:40:45.679 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:45.679+0000 7f3271dda900 -1 created new key in keyring /var/lib/ceph/osd/ceph-3/keyring
2026-03-20T12:40:45.679 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:45.679+0000 7f3271dda900 -1 bdev(0x55b22a10d800 /var/lib/ceph/osd/ceph-3/block) open stat got: (1) Operation not permitted
2026-03-20T12:40:45.679 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:40:45.679+0000 7f3271dda900 -1 bluestore(/var/lib/ceph/osd/ceph-3) _read_fsid unparsable uuid
2026-03-20T12:40:46.382 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-20T12:40:46.452 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /var/lib/ceph/osd/ceph-4
2026-03-20T12:40:46.478 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:46.478 INFO:tasks.ceph:role: osd.4
2026-03-20T12:40:46.478 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm06.local
2026-03-20T12:40:46.478 DEBUG:teuthology.orchestra.run.vm06:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1
2026-03-20T12:40:46.544 INFO:teuthology.orchestra.run.vm06.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks
2026-03-20T12:40:46.544 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-20T12:40:46.545 INFO:teuthology.orchestra.run.vm06.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-20T12:40:46.545 INFO:teuthology.orchestra.run.vm06.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-20T12:40:46.545 INFO:teuthology.orchestra.run.vm06.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-20T12:40:46.545 INFO:teuthology.orchestra.run.vm06.stdout: = sunit=0 swidth=0 blks
2026-03-20T12:40:46.545 INFO:teuthology.orchestra.run.vm06.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-20T12:40:46.545 INFO:teuthology.orchestra.run.vm06.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-20T12:40:46.545 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-20T12:40:46.545 INFO:teuthology.orchestra.run.vm06.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-20T12:40:46.549 INFO:teuthology.orchestra.run.vm06.stdout:Discarding blocks...Done.
2026-03-20T12:40:46.552 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm06.local -o noatime
2026-03-20T12:40:46.552 DEBUG:teuthology.orchestra.run.vm06:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-4
2026-03-20T12:40:46.624 DEBUG:teuthology.orchestra.run.vm06:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-4
2026-03-20T12:40:46.692 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /var/lib/ceph/osd/ceph-5
2026-03-20T12:40:46.759 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:46.759 INFO:tasks.ceph:role: osd.5
2026-03-20T12:40:46.759 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm06.local
2026-03-20T12:40:46.760 DEBUG:teuthology.orchestra.run.vm06:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout: = sunit=0 swidth=0 blks
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-20T12:40:46.825 INFO:teuthology.orchestra.run.vm06.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-20T12:40:46.829 INFO:teuthology.orchestra.run.vm06.stdout:Discarding blocks...Done.
2026-03-20T12:40:46.831 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm06.local -o noatime
2026-03-20T12:40:46.831 DEBUG:teuthology.orchestra.run.vm06:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-5
2026-03-20T12:40:46.897 DEBUG:teuthology.orchestra.run.vm06:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-5
2026-03-20T12:40:46.965 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /var/lib/ceph/osd/ceph-6
2026-03-20T12:40:47.031 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:47.031 INFO:tasks.ceph:role: osd.6
2026-03-20T12:40:47.031 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_3 on ubuntu@vm06.local
2026-03-20T12:40:47.031 DEBUG:teuthology.orchestra.run.vm06:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_3
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout:meta-data=/dev/vg_nvme/lv_3 isize=2048 agcount=4, agsize=1310464 blks
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout: = sunit=0 swidth=0 blks
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-20T12:40:47.094 INFO:teuthology.orchestra.run.vm06.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-20T12:40:47.098 INFO:teuthology.orchestra.run.vm06.stdout:Discarding blocks...Done.
2026-03-20T12:40:47.100 INFO:tasks.ceph:mount /dev/vg_nvme/lv_3 on ubuntu@vm06.local -o noatime
2026-03-20T12:40:47.100 DEBUG:teuthology.orchestra.run.vm06:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_3 /var/lib/ceph/osd/ceph-6
2026-03-20T12:40:47.169 DEBUG:teuthology.orchestra.run.vm06:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-6
2026-03-20T12:40:47.239 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /var/lib/ceph/osd/ceph-7
2026-03-20T12:40:47.303 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}
2026-03-20T12:40:47.303 INFO:tasks.ceph:role: osd.7
2026-03-20T12:40:47.304 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_4 on ubuntu@vm06.local
2026-03-20T12:40:47.304 DEBUG:teuthology.orchestra.run.vm06:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_4
2026-03-20T12:40:47.368 INFO:teuthology.orchestra.run.vm06.stdout:meta-data=/dev/vg_nvme/lv_4 isize=2048 agcount=4, agsize=1310464 blks
2026-03-20T12:40:47.368 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-20T12:40:47.368 INFO:teuthology.orchestra.run.vm06.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-20T12:40:47.369 INFO:teuthology.orchestra.run.vm06.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-20T12:40:47.369 INFO:teuthology.orchestra.run.vm06.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-20T12:40:47.369 INFO:teuthology.orchestra.run.vm06.stdout: = sunit=0 swidth=0 blks
2026-03-20T12:40:47.369 INFO:teuthology.orchestra.run.vm06.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-20T12:40:47.369 INFO:teuthology.orchestra.run.vm06.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-20T12:40:47.369 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-20T12:40:47.369 INFO:teuthology.orchestra.run.vm06.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-20T12:40:47.373 INFO:teuthology.orchestra.run.vm06.stdout:Discarding blocks...Done.
2026-03-20T12:40:47.375 INFO:tasks.ceph:mount /dev/vg_nvme/lv_4 on ubuntu@vm06.local -o noatime
2026-03-20T12:40:47.375 DEBUG:teuthology.orchestra.run.vm06:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-7
2026-03-20T12:40:47.446 DEBUG:teuthology.orchestra.run.vm06:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-7
2026-03-20T12:40:47.513 DEBUG:teuthology.orchestra.run.vm06:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 4 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-20T12:40:47.597 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:47.598+0000 7f1d0f24d900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-4/keyring: can't open /var/lib/ceph/osd/ceph-4/keyring: (2) No such file or directory
2026-03-20T12:40:47.598 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:47.598+0000 7f1d0f24d900 -1 created new key in keyring /var/lib/ceph/osd/ceph-4/keyring
2026-03-20T12:40:47.598 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:47.598+0000 7f1d0f24d900 -1 bdev(0x56136af59800 /var/lib/ceph/osd/ceph-4/block) open stat got: (1) Operation not permitted
2026-03-20T12:40:47.598 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:47.598+0000 7f1d0f24d900 -1 bluestore(/var/lib/ceph/osd/ceph-4) _read_fsid unparsable uuid
2026-03-20T12:40:48.306 DEBUG:teuthology.orchestra.run.vm06:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
2026-03-20T12:40:48.334 DEBUG:teuthology.orchestra.run.vm06:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 5 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-20T12:40:48.416 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:48.416+0000 7fa47a593900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-5/keyring: can't open /var/lib/ceph/osd/ceph-5/keyring: (2) No such file or directory
2026-03-20T12:40:48.416 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:48.416+0000 7fa47a593900 -1 created new key in keyring /var/lib/ceph/osd/ceph-5/keyring
2026-03-20T12:40:48.416 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:48.416+0000 7fa47a593900 -1 bdev(0x5648e0d0b800 /var/lib/ceph/osd/ceph-5/block) open stat got: (1) Operation not permitted
2026-03-20T12:40:48.416 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:48.416+0000 7fa47a593900 -1 bluestore(/var/lib/ceph/osd/ceph-5) _read_fsid unparsable uuid
2026-03-20T12:40:49.200 DEBUG:teuthology.orchestra.run.vm06:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-20T12:40:49.271 DEBUG:teuthology.orchestra.run.vm06:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 6 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-20T12:40:49.350 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:49.350+0000 7f7f86220900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-6/keyring: can't open /var/lib/ceph/osd/ceph-6/keyring: (2) No such file or directory
2026-03-20T12:40:49.351 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:49.351+0000 7f7f86220900 -1 created new key in keyring /var/lib/ceph/osd/ceph-6/keyring
2026-03-20T12:40:49.351 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:49.351+0000 7f7f86220900 -1 bdev(0x5574d61f9800 /var/lib/ceph/osd/ceph-6/block) open stat got: (1) Operation not permitted
2026-03-20T12:40:49.351 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:49.351+0000 7f7f86220900 -1 bluestore(/var/lib/ceph/osd/ceph-6) _read_fsid unparsable uuid
2026-03-20T12:40:50.069 DEBUG:teuthology.orchestra.run.vm06:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
2026-03-20T12:40:50.137 DEBUG:teuthology.orchestra.run.vm06:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 7 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-20T12:40:50.215 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:50.215+0000 7fb8d90ad900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-7/keyring: can't open /var/lib/ceph/osd/ceph-7/keyring: (2) No such file or directory
2026-03-20T12:40:50.216 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:50.216+0000 7fb8d90ad900 -1 created new key in keyring /var/lib/ceph/osd/ceph-7/keyring
2026-03-20T12:40:50.216 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:50.216+0000 7fb8d90ad900 -1 bdev(0x555b57fa1800 /var/lib/ceph/osd/ceph-7/block) open stat got: (1) Operation not permitted
2026-03-20T12:40:50.216 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:40:50.216+0000 7fb8d90ad900 -1 bluestore(/var/lib/ceph/osd/ceph-7) _read_fsid unparsable uuid
2026-03-20T12:40:50.888 DEBUG:teuthology.orchestra.run.vm06:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
2026-03-20T12:40:50.955 INFO:tasks.ceph:Reading keys from all nodes...
2026-03-20T12:40:50.955 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:50.955 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/mgr/ceph-y/keyring of=/dev/stdout
2026-03-20T12:40:50.981 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:50.981 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-0/keyring of=/dev/stdout
2026-03-20T12:40:51.046 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:51.046 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-1/keyring of=/dev/stdout
2026-03-20T12:40:51.111 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:51.111 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-2/keyring of=/dev/stdout
2026-03-20T12:40:51.176 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:51.176 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-3/keyring of=/dev/stdout
2026-03-20T12:40:51.240 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:51.240 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/mgr/ceph-x/keyring of=/dev/stdout
2026-03-20T12:40:51.267 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:51.267 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-4/keyring of=/dev/stdout
2026-03-20T12:40:51.334 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:51.334 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-5/keyring of=/dev/stdout
2026-03-20T12:40:51.401 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:51.401 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-6/keyring of=/dev/stdout
2026-03-20T12:40:51.468 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:51.468 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-7/keyring of=/dev/stdout
2026-03-20T12:40:51.532 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:40:51.532 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.0.keyring of=/dev/stdout
2026-03-20T12:40:51.547 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-20T12:40:51.547 DEBUG:teuthology.orchestra.run.vm06:> dd if=/etc/ceph/ceph.client.1.keyring of=/dev/stdout
2026-03-20T12:40:51.589 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-20T12:40:51.589 DEBUG:teuthology.orchestra.run.vm09:> dd if=/etc/ceph/ceph.client.2.keyring of=/dev/stdout
2026-03-20T12:40:51.605 INFO:tasks.ceph:Adding keys to all mons...
2026-03-20T12:40:51.605 DEBUG:teuthology.orchestra.run.vm00:> sudo tee -a /etc/ceph/ceph.keyring
2026-03-20T12:40:51.607 DEBUG:teuthology.orchestra.run.vm06:> sudo tee -a /etc/ceph/ceph.keyring
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[mgr.y]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBJQL1pX6VGKBAAxAoGQt+K6EOf+LiFyi3U0A==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[osd.0]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBLQL1pBcPIAxAAy/lBBmGPpoyzKYnyv11Ezg==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[osd.1]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBLQL1p81JnNBAA0RyGD/8+f0F37XqEjo61WA==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[osd.2]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBMQL1phxoCLhAAVLQHPoaxRENhQxOOHIR6Zg==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[osd.3]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBNQL1pGlGAKBAAnXH4tbb8w0FCxI3/plUh0g==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[mgr.x]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBJQL1pfV3JKxAA/f1Ek5tRM/QGfAU4X8FcIQ==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[osd.4]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBPQL1pgNy0IxAAi09m5l8crrDd29pNwts7Sg==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[osd.5]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBQQL1pz8TeGBAAEmmDgaAyfbPPmKBJddVJ/w==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[osd.6]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBRQL1p/2T1FBAAv/TYk6GwmGjAWQJ17bymZg==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[osd.7]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBSQL1pFL/pDBAAcaIkpoIJ9tQMEX9/Q+uWMQ==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[client.0]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBJQL1pLmsALhAA37x77LD0TXh4NpddicxNMg==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[client.1]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBJQL1pvLQaMRAA0diw/8x74sXgiSBPK2e6IA==
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout:[client.2]
2026-03-20T12:40:51.632 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBJQL1pGwGKNBAAcRChPsve5oHqfE4vQJeYSw==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[mgr.y]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBJQL1pX6VGKBAAxAoGQt+K6EOf+LiFyi3U0A==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[osd.0]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBLQL1pBcPIAxAAy/lBBmGPpoyzKYnyv11Ezg==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[osd.1]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBLQL1p81JnNBAA0RyGD/8+f0F37XqEjo61WA==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[osd.2]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBMQL1phxoCLhAAVLQHPoaxRENhQxOOHIR6Zg==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[osd.3]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBNQL1pGlGAKBAAnXH4tbb8w0FCxI3/plUh0g==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[mgr.x]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBJQL1pfV3JKxAA/f1Ek5tRM/QGfAU4X8FcIQ==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[osd.4]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBPQL1pgNy0IxAAi09m5l8crrDd29pNwts7Sg==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[osd.5]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBQQL1pz8TeGBAAEmmDgaAyfbPPmKBJddVJ/w==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[osd.6]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBRQL1p/2T1FBAAv/TYk6GwmGjAWQJ17bymZg==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[osd.7]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBSQL1pFL/pDBAAcaIkpoIJ9tQMEX9/Q+uWMQ==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[client.0]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBJQL1pLmsALhAA37x77LD0TXh4NpddicxNMg==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[client.1]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBJQL1pvLQaMRAA0diw/8x74sXgiSBPK2e6IA==
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout:[client.2]
2026-03-20T12:40:51.654 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBJQL1pGwGKNBAAcRChPsve5oHqfE4vQJeYSw==
2026-03-20T12:40:51.655 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.y --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-20T12:40:51.676 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.y --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-20T12:40:51.743 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:51.747 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:51.789 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:51.830 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:51.876 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:51.878 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:51.929 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:51.961 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.007 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-20T12:40:52.008 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-20T12:40:52.051 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.4 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.053 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.4 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.097 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.5 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.098 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.5 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.179 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.6 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.181 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.6 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.224 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.7 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.225 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.7 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-20T12:40:52.272 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-03-20T12:40:52.273 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-03-20T12:40:52.321 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.1 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-03-20T12:40:52.323 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.1 --cap mon 'allow
rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T12:40:52.367 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.2 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T12:40:52.369 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.2 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T12:40:52.416 INFO:tasks.ceph:Running mkfs on mon nodes... 2026-03-20T12:40:52.416 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mon/ceph-a 2026-03-20T12:40:52.439 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i a --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring 2026-03-20T12:40:52.530 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-a 2026-03-20T12:40:52.557 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mon/ceph-c 2026-03-20T12:40:52.623 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i c --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring 2026-03-20T12:40:52.717 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-c 2026-03-20T12:40:52.743 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /var/lib/ceph/mon/ceph-b 2026-03-20T12:40:52.769 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i b --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring 2026-03-20T12:40:52.865 DEBUG:teuthology.orchestra.run.vm06:> sudo 
chown -R ceph:ceph /var/lib/ceph/mon/ceph-b 2026-03-20T12:40:52.888 DEBUG:teuthology.orchestra.run.vm00:> rm -- /home/ubuntu/cephtest/ceph.monmap 2026-03-20T12:40:52.890 DEBUG:teuthology.orchestra.run.vm06:> rm -- /home/ubuntu/cephtest/ceph.monmap 2026-03-20T12:40:52.945 INFO:tasks.ceph:Starting mon daemons in cluster ceph... 2026-03-20T12:40:52.945 INFO:tasks.ceph.mon.a:Restarting daemon 2026-03-20T12:40:52.945 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a 2026-03-20T12:40:52.946 INFO:tasks.ceph.mon.a:Started 2026-03-20T12:40:52.946 INFO:tasks.ceph.mon.c:Restarting daemon 2026-03-20T12:40:52.946 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i c 2026-03-20T12:40:52.948 INFO:tasks.ceph.mon.c:Started 2026-03-20T12:40:52.948 INFO:tasks.ceph.mon.b:Restarting daemon 2026-03-20T12:40:52.948 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i b 2026-03-20T12:40:52.988 INFO:tasks.ceph.mon.b:Started 2026-03-20T12:40:52.988 INFO:tasks.ceph:Starting mgr daemons in cluster ceph... 
2026-03-20T12:40:52.988 INFO:tasks.ceph.mgr.y:Restarting daemon 2026-03-20T12:40:52.988 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i y 2026-03-20T12:40:52.989 INFO:tasks.ceph.mgr.y:Started 2026-03-20T12:40:52.989 INFO:tasks.ceph.mgr.x:Restarting daemon 2026-03-20T12:40:52.990 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x 2026-03-20T12:40:52.991 INFO:tasks.ceph.mgr.x:Started 2026-03-20T12:40:52.991 DEBUG:tasks.ceph:set 0 configs 2026-03-20T12:40:52.991 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph config dump 2026-03-20T12:40:53.293 INFO:teuthology.orchestra.run.vm00.stdout:WHO MASK LEVEL OPTION VALUE RO 2026-03-20T12:40:53.303 INFO:tasks.ceph:Setting crush tunables to default 2026-03-20T12:40:53.303 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd crush tunables default 2026-03-20T12:40:53.421 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-20T12:40:53.432 INFO:tasks.ceph:check_enable_crimson: False 2026-03-20T12:40:53.432 INFO:tasks.ceph:Starting osd daemons in cluster ceph... 
2026-03-20T12:40:53.432 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:53.432 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-0/fsid of=/dev/stdout 2026-03-20T12:40:53.456 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:53.456 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-1/fsid of=/dev/stdout 2026-03-20T12:40:53.526 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:53.526 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-2/fsid of=/dev/stdout 2026-03-20T12:40:53.591 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T12:40:53.591 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-3/fsid of=/dev/stdout 2026-03-20T12:40:53.658 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-20T12:40:53.658 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-4/fsid of=/dev/stdout 2026-03-20T12:40:53.684 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-20T12:40:53.684 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-5/fsid of=/dev/stdout 2026-03-20T12:40:53.749 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-20T12:40:53.749 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-6/fsid of=/dev/stdout 2026-03-20T12:40:53.811 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-20T12:40:53.812 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-7/fsid of=/dev/stdout 2026-03-20T12:40:53.875 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new d526d31b-fd12-4714-803b-79f5889ef4ba 0 2026-03-20T12:40:54.039 INFO:teuthology.orchestra.run.vm06.stdout:0 2026-03-20T12:40:54.049 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new d8b77601-f877-4ce2-8f34-578be14bc1a2 1 2026-03-20T12:40:54.058 INFO:tasks.ceph.mgr.x.vm06.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a 
Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-20T12:40:54.058 INFO:tasks.ceph.mgr.x.vm06.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-20T12:40:54.058 INFO:tasks.ceph.mgr.x.vm06.stderr: from numpy import show_config as show_numpy_config 2026-03-20T12:40:54.069 INFO:tasks.ceph.mgr.y.vm00.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-20T12:40:54.069 INFO:tasks.ceph.mgr.y.vm00.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-20T12:40:54.069 INFO:tasks.ceph.mgr.y.vm00.stderr: from numpy import show_config as show_numpy_config 2026-03-20T12:40:54.180 INFO:teuthology.orchestra.run.vm06.stdout:1 2026-03-20T12:40:54.190 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 4ce3087e-e158-489d-a219-191c29b36a54 2 2026-03-20T12:40:54.347 INFO:teuthology.orchestra.run.vm06.stdout:2 2026-03-20T12:40:54.356 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 44ee3fbb-f0f9-4157-9ecf-93ee38c1e226 3 2026-03-20T12:40:54.480 INFO:teuthology.orchestra.run.vm06.stdout:3 2026-03-20T12:40:54.490 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 19b44160-7ce5-4aab-8365-8fd1def68987 4 2026-03-20T12:40:54.610 INFO:teuthology.orchestra.run.vm06.stdout:4 2026-03-20T12:40:54.619 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 4ca07e3d-e23b-4818-9b37-467e81c522a4 5 2026-03-20T12:40:54.735 INFO:teuthology.orchestra.run.vm06.stdout:5 2026-03-20T12:40:54.746 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new fe447377-8766-43d5-8dea-7d378e56c784 6 2026-03-20T12:40:54.861 INFO:teuthology.orchestra.run.vm06.stdout:6 2026-03-20T12:40:54.870 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 347eae2d-dfdd-44d0-8165-f778e8d870e4 7 2026-03-20T12:40:54.988 INFO:teuthology.orchestra.run.vm06.stdout:7 2026-03-20T12:40:54.999 INFO:tasks.ceph.osd.0:Restarting daemon 2026-03-20T12:40:54.999 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0 2026-03-20T12:40:55.001 INFO:tasks.ceph.osd.0:Started 2026-03-20T12:40:55.001 INFO:tasks.ceph.osd.1:Restarting daemon 2026-03-20T12:40:55.001 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1 2026-03-20T12:40:55.002 INFO:tasks.ceph.osd.1:Started 
2026-03-20T12:40:55.002 INFO:tasks.ceph.osd.2:Restarting daemon 2026-03-20T12:40:55.002 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2 2026-03-20T12:40:55.005 INFO:tasks.ceph.osd.2:Started 2026-03-20T12:40:55.005 INFO:tasks.ceph.osd.3:Restarting daemon 2026-03-20T12:40:55.005 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3 2026-03-20T12:40:55.009 INFO:tasks.ceph.osd.3:Started 2026-03-20T12:40:55.009 INFO:tasks.ceph.osd.4:Restarting daemon 2026-03-20T12:40:55.009 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 4 2026-03-20T12:40:55.011 INFO:tasks.ceph.osd.4:Started 2026-03-20T12:40:55.011 INFO:tasks.ceph.osd.5:Restarting daemon 2026-03-20T12:40:55.011 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 5 2026-03-20T12:40:55.013 INFO:tasks.ceph.osd.5:Started 2026-03-20T12:40:55.013 INFO:tasks.ceph.osd.6:Restarting daemon 2026-03-20T12:40:55.013 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 6 2026-03-20T12:40:55.016 INFO:tasks.ceph.osd.6:Started 2026-03-20T12:40:55.016 INFO:tasks.ceph.osd.7:Restarting daemon 2026-03-20T12:40:55.017 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 7 2026-03-20T12:40:55.019 INFO:tasks.ceph.osd.7:Started 2026-03-20T12:40:55.019 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump 
--format=json 2026-03-20T12:40:55.171 INFO:tasks.ceph.osd.6.vm06.stderr:2026-03-20T12:40:55.171+0000 7fb609b54900 -1 Falling back to public interface 2026-03-20T12:40:55.181 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:40:55.181 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":10,"fsid":"8a1e3aca-ae1e-437d-a30d-aacd48456e6d","created":"2026-03-20T12:40:53.234765+0000","modified":"2026-03-20T12:40:54.985569+0000","last_up_change":"0.000000","last_in_change":"2026-03-20T12:40:54.985569+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"d526d31b-fd12-4714-803b-79f5889ef4ba","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"d8b77601-f877-4ce2-8f34-578be14bc1a2","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 
0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"4ce3087e-e158-489d-a219-191c29b36a54","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"44ee3fbb-f0f9-4157-9ecf-93ee38c1e226","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":4,"uuid":"19b44160-7ce5-4aab-8365-8fd1def68987","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":5,"uuid":"4ca07e3d-e23b-4818-9b37-467e81c522a4","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":6,"uuid":"fe447377-8766-43d5-8dea-7d378e56c784","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":7,"uuid":"347eae2d-dfdd-44d0-8165-f778e8d870e4","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T12:40:55.181 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-20T12:40:55.180+0000 7f17d5219900 -1 Falling back to public interface 2026-03-20T12:40:55.182 
INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T12:40:55.182+0000 7fd515c72900 -1 Falling back to public interface 2026-03-20T12:40:55.192 INFO:tasks.ceph.ceph_manager.ceph:[] 2026-03-20T12:40:55.192 INFO:tasks.ceph:Waiting for OSDs to come up 2026-03-20T12:40:55.198 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T12:40:55.198+0000 7fdce2ca8900 -1 Falling back to public interface 2026-03-20T12:40:55.199 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T12:40:55.199+0000 7f1b4d5f4900 -1 Falling back to public interface 2026-03-20T12:40:55.206 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-20T12:40:55.205+0000 7f36c0213900 -1 Falling back to public interface 2026-03-20T12:40:55.209 INFO:tasks.ceph.osd.7.vm06.stderr:2026-03-20T12:40:55.209+0000 7f9eb320e900 -1 Falling back to public interface 2026-03-20T12:40:55.212 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T12:40:55.212+0000 7f2bc4ef0900 -1 Falling back to public interface 2026-03-20T12:40:55.601 INFO:tasks.ceph.osd.6.vm06.stderr:2026-03-20T12:40:55.601+0000 7fb609b54900 -1 osd.6 0 log_to_monitors true 2026-03-20T12:40:55.616 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T12:40:55.616+0000 7fdce2ca8900 -1 osd.2 0 log_to_monitors true 2026-03-20T12:40:55.627 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T12:40:55.627+0000 7f1b4d5f4900 -1 osd.3 0 log_to_monitors true 2026-03-20T12:40:55.646 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-20T12:40:55.646+0000 7f36c0213900 -1 osd.5 0 log_to_monitors true 2026-03-20T12:40:55.650 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T12:40:55.650+0000 7fd515c72900 -1 osd.0 0 log_to_monitors true 2026-03-20T12:40:55.659 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-20T12:40:55.659+0000 7f17d5219900 -1 osd.4 0 log_to_monitors true 2026-03-20T12:40:55.704 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T12:40:55.703+0000 7f2bc4ef0900 -1 osd.1 0 log_to_monitors true 2026-03-20T12:40:55.708 INFO:tasks.ceph.osd.7.vm06.stderr:2026-03-20T12:40:55.708+0000 7f9eb320e900 -1 osd.7 0 log_to_monitors true 
2026-03-20T12:40:55.996 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-03-20T12:40:56.097 INFO:teuthology.misc.health.vm00.stdout: 2026-03-20T12:40:56.097 INFO:teuthology.misc.health.vm00.stdout:{"epoch":10,"fsid":"8a1e3aca-ae1e-437d-a30d-aacd48456e6d","created":"2026-03-20T12:40:53.234765+0000","modified":"2026-03-20T12:40:54.985569+0000","last_up_change":"0.000000","last_in_change":"2026-03-20T12:40:54.985569+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"d526d31b-fd12-4714-803b-79f5889ef4ba","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"d8b77601-f877-4ce2-8f34-578be14bc1a2","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized 
address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"4ce3087e-e158-489d-a219-191c29b36a54","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"44ee3fbb-f0f9-4157-9ecf-93ee38c1e226","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":4,"uuid":"19b44160-7ce5-4aab-8365-8fd1def68987","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":5,"uuid":"4ca07e3d-e23b-4818-9b37-467e81c522a4","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":6,"uuid":"fe447377-8766-43d5-8dea-7d378e56c784","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":7,"uuid":"347eae2d-dfdd-44d0-8165-f778e8d870e4","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T12:40:56.105 DEBUG:teuthology.misc:0 of 8 OSDs are up 2026-03-20T12:40:56.687 INFO:tasks.ceph.mgr.x.vm06.stderr:2026-03-20T12:40:56.688+0000 7f6ef2dad640 -1 mgr.server handle_report got 
status from non-daemon mon.b
2026-03-20T12:40:56.688 INFO:tasks.ceph.mgr.x.vm06.stderr:2026-03-20T12:40:56.689+0000 7f6ef2dad640 -1 mgr.server handle_report got status from non-daemon mon.c
2026-03-20T12:40:57.263 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T12:40:57.262+0000 7fd5113e0640 -1 osd.0 0 waiting for initial osdmap
2026-03-20T12:40:57.263 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T12:40:57.263+0000 7f1b48d62640 -1 osd.3 0 waiting for initial osdmap
2026-03-20T12:40:57.267 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T12:40:57.267+0000 7fdcde414640 -1 osd.2 0 waiting for initial osdmap
2026-03-20T12:40:57.267 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T12:40:57.267+0000 7f2bc1693640 -1 osd.1 0 waiting for initial osdmap
2026-03-20T12:40:57.269 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-20T12:40:57.270+0000 7f36bb079640 -1 osd.5 0 waiting for initial osdmap
2026-03-20T12:40:57.269 INFO:tasks.ceph.osd.6.vm06.stderr:2026-03-20T12:40:57.270+0000 7fb6052c0640 -1 osd.6 0 waiting for initial osdmap
2026-03-20T12:40:57.269 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-20T12:40:57.270+0000 7f17d0180640 -1 osd.4 0 waiting for initial osdmap
2026-03-20T12:40:57.269 INFO:tasks.ceph.osd.7.vm06.stderr:2026-03-20T12:40:57.270+0000 7f9eae179640 -1 osd.7 0 waiting for initial osdmap
2026-03-20T12:40:57.273 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-20T12:40:57.273+0000 7f36b6690640 -1 osd.5 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-20T12:40:57.274 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T12:40:57.274+0000 7f2bbbc75640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-20T12:40:57.274 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-20T12:40:57.274+0000 7f17cb797640 -1 osd.4 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-20T12:40:57.274 INFO:tasks.ceph.osd.6.vm06.stderr:2026-03-20T12:40:57.274+0000 7fb6008d7640 -1 osd.6 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-20T12:40:57.275 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T12:40:57.275+0000 7fdcd9a2b640 -1 osd.2 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-20T12:40:57.275 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T12:40:57.275+0000 7f1b44379640 -1 osd.3 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-20T12:40:57.275 INFO:tasks.ceph.osd.7.vm06.stderr:2026-03-20T12:40:57.275+0000 7f9ea9790640 -1 osd.7 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-20T12:40:57.275 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T12:40:57.275+0000 7fd50c9f7640 -1 osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-20T12:41:02.910 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json
2026-03-20T12:41:03.091 INFO:teuthology.misc.health.vm00.stdout:
2026-03-20T12:41:03.091
INFO:teuthology.misc.health.vm00.stdout:{"epoch":17,"fsid":"8a1e3aca-ae1e-437d-a30d-aacd48456e6d","created":"2026-03-20T12:40:53.234765+0000","modified":"2026-03-20T12:41:02.296965+0000","last_up_change":"2026-03-20T12:40:58.268164+0000","last_in_change":"2026-03-20T12:40:54.985569+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T12:40:59.698247+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":8,"score_stable":8,"optimal_score":0.25,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"d526d31b-fd12-4714-803b-79f5889ef4ba","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6801","nonce":3387304204}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6803","nonce":3387304204}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6807","nonce":3387304204}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6805","nonce":3387304204}]},"public_addr":"192.168.123.100:6801/3387304204","cluster_addr":"192.168.123.100:6803/3387304204","heartbeat_back_addr":"192.168.123.100:6807/3387304204","heartbeat_front_addr":"192.168.123.100:6805/3387304204","state":["exists","up"]},{"osd":1,"uuid":"d8b77601-f877-4ce2-8f34-578be14bc1a2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6
825","nonce":3638630257}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6827","nonce":3638630257}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6831","nonce":3638630257}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6829","nonce":3638630257}]},"public_addr":"192.168.123.100:6825/3638630257","cluster_addr":"192.168.123.100:6827/3638630257","heartbeat_back_addr":"192.168.123.100:6831/3638630257","heartbeat_front_addr":"192.168.123.100:6829/3638630257","state":["exists","up"]},{"osd":2,"uuid":"4ce3087e-e158-489d-a219-191c29b36a54","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6809","nonce":3910428362}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6811","nonce":3910428362}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6816","nonce":3910428362}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6813","nonce":3910428362}]},"public_addr":"192.168.123.100:6809/3910428362","cluster_addr":"192.168.123.100:6811/3910428362","heartbeat_back_addr":"192.168.123.100:6816/3910428362","heartbeat_front_addr":"192.168.123.100:6813/3910428362","state":["exists","up"]},{"osd":3,"uuid":"44ee3fbb-f0f9-4157-9ecf-93ee38c1e226","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"p
ublic_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6817","nonce":872239044}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6819","nonce":872239044}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6823","nonce":872239044}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6821","nonce":872239044}]},"public_addr":"192.168.123.100:6817/872239044","cluster_addr":"192.168.123.100:6819/872239044","heartbeat_back_addr":"192.168.123.100:6823/872239044","heartbeat_front_addr":"192.168.123.100:6821/872239044","state":["exists","up"]},{"osd":4,"uuid":"19b44160-7ce5-4aab-8365-8fd1def68987","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6809","nonce":1114859926}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6811","nonce":1114859926}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6814","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6815","nonce":1114859926}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6813","nonce":1114859926}]},"public_addr":"192.168.123.106:6809/1114859926","cluster_addr":"192.168.123.106:6811/1114859926","heartbeat_back_addr":"192.168.123.106:6815/1114859926","heartbeat_front_addr":"192.168.123.106:6813/1114859926","state":["exists","up"]},{"osd":5,"uuid":"4ca07e3d-e23b-4818-9b37-467e81c522a4","up":1,"in":1,"weight":
1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6817","nonce":2245730692}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6818","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6819","nonce":2245730692}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6822","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6823","nonce":2245730692}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6820","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6821","nonce":2245730692}]},"public_addr":"192.168.123.106:6817/2245730692","cluster_addr":"192.168.123.106:6819/2245730692","heartbeat_back_addr":"192.168.123.106:6823/2245730692","heartbeat_front_addr":"192.168.123.106:6821/2245730692","state":["exists","up"]},{"osd":6,"uuid":"fe447377-8766-43d5-8dea-7d378e56c784","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6800","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6801","nonce":3650942598}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6803","nonce":3650942598}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6807","nonce":3650942598}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6805","nonce":3650942598}]},"public_addr":"192.168.123.106:6801/3650942598","cluster_addr":"192.168.123.106:6803/3650942598","heartbeat_back_addr":"192.168.123.106:6807/3650942598","heartbeat_front_addr":"192.168.12
3.106:6805/3650942598","state":["exists","up"]},{"osd":7,"uuid":"347eae2d-dfdd-44d0-8165-f778e8d870e4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":15,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6824","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6825","nonce":894001917}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6826","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6827","nonce":894001917}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6830","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6831","nonce":894001917}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6828","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6829","nonce":894001917}]},"public_addr":"192.168.123.106:6825/894001917","cluster_addr":"192.168.123.106:6827/894001917","heartbeat_back_addr":"192.168.123.106:6831/894001917","heartbeat_front_addr":"192.168.123.106:6829/894001917","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.697455+0000","dead_epoch":0
},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.662947+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.592261+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.745964+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T12:41:03.100 DEBUG:teuthology.misc:8 of 8 OSDs are up 2026-03-20T12:41:03.100 INFO:tasks.ceph:Creating RBD pool 2026-03-20T12:41:03.100 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd pool create rbd 8 2026-03-20T12:41:03.315 INFO:teuthology.orchestra.run.vm00.stderr:pool 'rbd' created 2026-03-20T12:41:03.326 DEBUG:teuthology.orchestra.run.vm00:> rbd --cluster ceph pool init rbd 2026-03-20T12:41:03.355 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:03.355 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:06.337 INFO:tasks.ceph:Starting mds daemons in cluster ceph... 
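Aside: the "8 of 8 OSDs are up" line above is derived from the `osd dump --format=json` output shown in this log, by counting entries in the `osds` array whose `up` field is 1. A minimal sketch of that check — the function name and structure here are illustrative assumptions, not teuthology's actual implementation:

```python
import json

def count_up_osds(osd_dump_json: str) -> tuple[int, int]:
    """Return (number of up OSDs, max_osd) from `ceph osd dump --format=json` output.

    Hypothetical helper; mirrors the check teuthology.misc logs as
    "N of M OSDs are up".
    """
    dump = json.loads(osd_dump_json)
    osds = dump.get("osds", [])
    # Each OSD entry carries "up": 1 when the daemon has booted and been
    # marked up in the osdmap, 0 otherwise.
    up = sum(1 for o in osds if o.get("up") == 1)
    return up, dump.get("max_osd", len(osds))

# Tiny synthetic dump (not from this log) to show the shape of the check:
sample = '{"max_osd": 8, "osds": [{"osd": 0, "up": 1}, {"osd": 1, "up": 0}]}'
up, total = count_up_osds(sample)
print(f"{up} of {total} OSDs are up")  # 1 of 8 OSDs are up
```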
2026-03-20T12:41:06.337 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config log 1 --format=json
2026-03-20T12:41:06.337 INFO:tasks.daemonwatchdog.daemon_watchdog:watchdog starting
2026-03-20T12:41:06.584 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:41:06.595 INFO:teuthology.orchestra.run.vm00.stdout:[{"version":1,"timestamp":"0.000000","name":"","changes":[]}]
2026-03-20T12:41:06.596 INFO:tasks.ceph_manager:config epoch is 1
2026-03-20T12:41:06.596 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-20T12:41:06.596 INFO:tasks.ceph.ceph_manager.ceph:waiting for mgr available
2026-03-20T12:41:06.596 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr dump --format=json
2026-03-20T12:41:06.832 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:41:06.844 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"flags":0,"active_gid":4102,"active_name":"x","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6832","nonce":2869882786},{"type":"v1","addr":"192.168.123.106:6833","nonce":2869882786}]},"active_addr":"192.168.123.106:6833/2869882786","active_change":"2026-03-20T12:40:55.671033+0000","active_mgr_features":4544132024016699391,"available":true,"standbys":[{"gid":4103,"name":"y","mgr_features":4544132024016699391,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for 
certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in 
seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every 
host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus 
deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray 
daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_PO
LICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advan
ced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD
_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}
,{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health 
metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_al
lowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced",
"flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{
"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"te
stnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set internal store backend. for develoment and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are 
changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","leve
l":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","
level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, 
etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leader
board","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrat
or","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","lo
ng_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["iostat","nfs"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for 
certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in 
seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every 
host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus 
deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray 
daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_PO
LICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advan
ced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD
_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}
,{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health 
metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_al
lowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced",
"flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{
"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"te
stnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set internal store backend. for development and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are 
changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","leve
l":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","
level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, 
etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leader
board","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrat
or","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","lo
ng_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"tentacle":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":0,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":3922962607}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":1064379881}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":3794072856}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":3314463105}]}]} 2026-03-20T12:41:06.845 INFO:tasks.ceph.ceph_manager.ceph:mgr available! 
2026-03-20T12:41:06.845 INFO:tasks.ceph.ceph_manager.ceph:waiting for all up 2026-03-20T12:41:06.845 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-20T12:41:07.036 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:41:07.036 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":21,"fsid":"8a1e3aca-ae1e-437d-a30d-aacd48456e6d","created":"2026-03-20T12:40:53.234765+0000","modified":"2026-03-20T12:41:06.323641+0000","last_up_change":"2026-03-20T12:40:58.268164+0000","last_in_change":"2026-03-20T12:40:54.985569+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T12:40:59.698247+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_
max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":8,"score_stable":8,"optimal_score":0.25,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-20T12:41:03.292265+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":21,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cach
e_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.9900000095367432,"score_stable":1.9900000095367432,"optimal_score":0.87999999523162842,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"d526d31b-fd12-4714-803b-79f5889ef4ba","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6801","nonce":3387304204}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6803","nonce":3387304204}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6807","nonce":3387304204}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6805","nonce":3387304204}]},"public_addr":"192.168.123.100:6801/3387304204","cluster_addr":"192.168.123.100:6803/3387304204","heartbeat_back_addr":"192.168.123.100:6807/3387304204","heartbeat_front_addr":"192.168.123.100:6805/3387304204","state":["exists","up"]},{"osd":1,"uuid":"d8b77601-f877-4ce2-8f34-578be14bc1a2","up":1,"in":1,"we
ight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6825","nonce":3638630257}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6827","nonce":3638630257}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6831","nonce":3638630257}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6829","nonce":3638630257}]},"public_addr":"192.168.123.100:6825/3638630257","cluster_addr":"192.168.123.100:6827/3638630257","heartbeat_back_addr":"192.168.123.100:6831/3638630257","heartbeat_front_addr":"192.168.123.100:6829/3638630257","state":["exists","up"]},{"osd":2,"uuid":"4ce3087e-e158-489d-a219-191c29b36a54","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6809","nonce":3910428362}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6811","nonce":3910428362}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6816","nonce":3910428362}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6813","nonce":3910428362}]},"public_addr":"192.168.123.100:6809/3910428362","cluster_addr":"192.168.123.100:6811/3910428362","heartbeat_back_addr":"192.168.123.100:6816/3910428362","heartbeat_front_addr":"19
2.168.123.100:6813/3910428362","state":["exists","up"]},{"osd":3,"uuid":"44ee3fbb-f0f9-4157-9ecf-93ee38c1e226","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6817","nonce":872239044}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6819","nonce":872239044}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6823","nonce":872239044}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6821","nonce":872239044}]},"public_addr":"192.168.123.100:6817/872239044","cluster_addr":"192.168.123.100:6819/872239044","heartbeat_back_addr":"192.168.123.100:6823/872239044","heartbeat_front_addr":"192.168.123.100:6821/872239044","state":["exists","up"]},{"osd":4,"uuid":"19b44160-7ce5-4aab-8365-8fd1def68987","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6809","nonce":1114859926}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6811","nonce":1114859926}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6814","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6815","nonce":1114859926}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6813","nonce":1114859926}]},"public_addr":"192.168.123.106:6809/1114859926","cluster_addr":"1
92.168.123.106:6811/1114859926","heartbeat_back_addr":"192.168.123.106:6815/1114859926","heartbeat_front_addr":"192.168.123.106:6813/1114859926","state":["exists","up"]},{"osd":5,"uuid":"4ca07e3d-e23b-4818-9b37-467e81c522a4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6817","nonce":2245730692}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6818","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6819","nonce":2245730692}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6822","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6823","nonce":2245730692}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6820","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6821","nonce":2245730692}]},"public_addr":"192.168.123.106:6817/2245730692","cluster_addr":"192.168.123.106:6819/2245730692","heartbeat_back_addr":"192.168.123.106:6823/2245730692","heartbeat_front_addr":"192.168.123.106:6821/2245730692","state":["exists","up"]},{"osd":6,"uuid":"fe447377-8766-43d5-8dea-7d378e56c784","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6800","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6801","nonce":3650942598}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6803","nonce":3650942598}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6807","nonce":3650942598}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":3650942598},{"
type":"v1","addr":"192.168.123.106:6805","nonce":3650942598}]},"public_addr":"192.168.123.106:6801/3650942598","cluster_addr":"192.168.123.106:6803/3650942598","heartbeat_back_addr":"192.168.123.106:6807/3650942598","heartbeat_front_addr":"192.168.123.106:6805/3650942598","state":["exists","up"]},{"osd":7,"uuid":"347eae2d-dfdd-44d0-8165-f778e8d870e4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6824","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6825","nonce":894001917}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6826","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6827","nonce":894001917}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6830","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6831","nonce":894001917}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6828","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6829","nonce":894001917}]},"public_addr":"192.168.123.106:6825/894001917","cluster_addr":"192.168.123.106:6827/894001917","heartbeat_back_addr":"192.168.123.106:6831/894001917","heartbeat_front_addr":"192.168.123.106:6829/894001917","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight"
:0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.697455+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.662947+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.592261+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.745964+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T12:41:07.047 INFO:tasks.ceph.ceph_manager.ceph:all up! 
2026-03-20T12:41:07.047 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-20T12:41:07.233 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:41:07.234 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":21,"fsid":"8a1e3aca-ae1e-437d-a30d-aacd48456e6d","created":"2026-03-20T12:40:53.234765+0000","modified":"2026-03-20T12:41:06.323641+0000","last_up_change":"2026-03-20T12:40:58.268164+0000","last_in_change":"2026-03-20T12:40:54.985569+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T12:40:59.698247+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_
mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":8,"score_stable":8,"optimal_score":0.25,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-20T12:41:03.292265+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":21,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":6000
00,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.9900000095367432,"score_stable":1.9900000095367432,"optimal_score":0.87999999523162842,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"d526d31b-fd12-4714-803b-79f5889ef4ba","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6801","nonce":3387304204}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6803","nonce":3387304204}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6807","nonce":3387304204}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3387304204},{"type":"v1","addr":"192.168.123.100:6805","nonce":3387304204}]},"public_addr":"192.168.123.100:6801/3387304204","cluster_addr":"192.168.123.100:6803/3387304204","heartbeat_back_addr":"192.168.123.100:6807/3387304204","heartbeat_front_addr":"192.168.123.100:6805/3387304204","state":["exists","up"]},{"osd":1,"uuid":"d8b77601-f877-4ce2-8f34-578be14bc1a2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6825","nonce":3638630257}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6827","nonce":3638630257}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6831","nonce":3638630257}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3638630257},{"type":"v1","addr":"192.168.123.100:6829","nonce":3638630257}]},"public_addr":"192.168.123.100:6825/3638630257","cluster_addr":"192.168.123.100:6827/3638630257","heartbeat_back_addr":"192.168.123.100:6831/3638630257","heartbeat_front_addr":"192.168.123.100:6829/3638630257","state":["exists","up"]},{"osd":2,"uuid":"4ce3087e-e158-489d-a219-191c29b36a54","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6809","nonce":3910428362}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6811","nonce":3910428362}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6816","nonce":3910428362}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3910428362},{"type":"v1","addr":"192.168.123.100:6813","nonce":3910428362}]},"public_addr":"192.168.123.100:6809/3910428362","cluster_addr":"192.168.123.100:6811/3910428362","heartbeat_back_addr":"192.168.123.100:6816/3910428362","heartbeat_front_addr":"192.168.123.100:6813/3910428362","state":["exists","up"]},{"osd":3,"uuid":"44ee
3fbb-f0f9-4157-9ecf-93ee38c1e226","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6815","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6817","nonce":872239044}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6819","nonce":872239044}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6823","nonce":872239044}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":872239044},{"type":"v1","addr":"192.168.123.100:6821","nonce":872239044}]},"public_addr":"192.168.123.100:6817/872239044","cluster_addr":"192.168.123.100:6819/872239044","heartbeat_back_addr":"192.168.123.100:6823/872239044","heartbeat_front_addr":"192.168.123.100:6821/872239044","state":["exists","up"]},{"osd":4,"uuid":"19b44160-7ce5-4aab-8365-8fd1def68987","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6809","nonce":1114859926}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6811","nonce":1114859926}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6814","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6815","nonce":1114859926}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":1114859926},{"type":"v1","addr":"192.168.123.106:6813","nonce":1114859926}]},"public_addr":"192.168.123.106:6809/1114859926","cluster_addr":"192.168.123.106:6811/1114859926","heartbeat_back_addr":"192.168.123.106:6815/1
114859926","heartbeat_front_addr":"192.168.123.106:6813/1114859926","state":["exists","up"]},{"osd":5,"uuid":"4ca07e3d-e23b-4818-9b37-467e81c522a4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6817","nonce":2245730692}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6818","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6819","nonce":2245730692}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6822","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6823","nonce":2245730692}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6820","nonce":2245730692},{"type":"v1","addr":"192.168.123.106:6821","nonce":2245730692}]},"public_addr":"192.168.123.106:6817/2245730692","cluster_addr":"192.168.123.106:6819/2245730692","heartbeat_back_addr":"192.168.123.106:6823/2245730692","heartbeat_front_addr":"192.168.123.106:6821/2245730692","state":["exists","up"]},{"osd":6,"uuid":"fe447377-8766-43d5-8dea-7d378e56c784","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6800","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6801","nonce":3650942598}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6803","nonce":3650942598}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6807","nonce":3650942598}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":3650942598},{"type":"v1","addr":"192.168.123.106:6805","nonce":3650942598}]},"public_addr":
"192.168.123.106:6801/3650942598","cluster_addr":"192.168.123.106:6803/3650942598","heartbeat_back_addr":"192.168.123.106:6807/3650942598","heartbeat_front_addr":"192.168.123.106:6805/3650942598","state":["exists","up"]},{"osd":7,"uuid":"347eae2d-dfdd-44d0-8165-f778e8d870e4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6824","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6825","nonce":894001917}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6826","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6827","nonce":894001917}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6830","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6831","nonce":894001917}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6828","nonce":894001917},{"type":"v1","addr":"192.168.123.106:6829","nonce":894001917}]},"public_addr":"192.168.123.106:6825/894001917","cluster_addr":"192.168.123.106:6827/894001917","heartbeat_back_addr":"192.168.123.106:6831/894001917","heartbeat_front_addr":"192.168.123.106:6829/894001917","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp"
:"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.697455+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.662947+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.592261+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T12:40:56.745964+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T12:41:07.245 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats 2026-03-20T12:41:07.245 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats 2026-03-20T12:41:07.245 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 flush_pg_stats 2026-03-20T12:41:07.245 
DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 flush_pg_stats 2026-03-20T12:41:07.245 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.4 flush_pg_stats 2026-03-20T12:41:07.245 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.5 flush_pg_stats 2026-03-20T12:41:07.245 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.6 flush_pg_stats 2026-03-20T12:41:07.246 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.7 flush_pg_stats 2026-03-20T12:41:07.412 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:07.412 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-03-20T12:41:07.421 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:07.422 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.7 2026-03-20T12:41:07.465 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:07.465 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-20T12:41:07.489 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:07.489 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd 
last-stat-seq osd.3 2026-03-20T12:41:07.501 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:07.501 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-20T12:41:07.501 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:07.501 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-20T12:41:07.508 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:07.509 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-03-20T12:41:07.532 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:07.532 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.6 2026-03-20T12:41:07.772 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:07.785 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574851 for osd.4 2026-03-20T12:41:07.836 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:07.849 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574851 for osd.2 2026-03-20T12:41:07.873 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T12:41:07.884 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:07.896 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.7 2026-03-20T12:41:07.896 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:07.904 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:07.907 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574851 for osd.0 
2026-03-20T12:41:07.918 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574851 for osd.1 2026-03-20T12:41:07.920 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T12:41:07.922 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574851 for osd.3 2026-03-20T12:41:07.932 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.5 2026-03-20T12:41:07.967 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T12:41:07.979 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.6 2026-03-20T12:41:08.786 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-03-20T12:41:08.850 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-20T12:41:08.897 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.7 2026-03-20T12:41:08.907 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-20T12:41:08.919 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-20T12:41:08.922 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-03-20T12:41:08.932 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-03-20T12:41:08.980 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits 
ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.6 2026-03-20T12:41:09.047 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:09.074 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574852 for osd.4 2026-03-20T12:41:09.074 DEBUG:teuthology.parallel:result is None 2026-03-20T12:41:09.170 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:09.191 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:09.194 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.7 2026-03-20T12:41:09.194 DEBUG:teuthology.parallel:result is None 2026-03-20T12:41:09.215 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574852 for osd.2 2026-03-20T12:41:09.216 DEBUG:teuthology.parallel:result is None 2026-03-20T12:41:09.261 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:09.276 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:09.282 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T12:41:09.290 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.6 2026-03-20T12:41:09.290 DEBUG:teuthology.parallel:result is None 2026-03-20T12:41:09.291 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574852 for osd.0 2026-03-20T12:41:09.291 DEBUG:teuthology.parallel:result is None 2026-03-20T12:41:09.298 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.5 2026-03-20T12:41:09.298 DEBUG:teuthology.parallel:result is None 2026-03-20T12:41:09.302 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:09.318 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574852 got 55834574852 for osd.1 2026-03-20T12:41:09.318 DEBUG:teuthology.parallel:result is None 2026-03-20T12:41:09.333 INFO:teuthology.orchestra.run.vm00.stdout:55834574852 2026-03-20T12:41:09.348 INFO:tasks.ceph.ceph_manager.ceph:need seq 
55834574852 got 55834574852 for osd.3 2026-03-20T12:41:09.348 DEBUG:teuthology.parallel:result is None 2026-03-20T12:41:09.348 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-03-20T12:41:09.348 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-20T12:41:09.579 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:41:09.579 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-20T12:41:09.589 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":19,"stamp":"2026-03-20T12:41:07.684284+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":590387,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":235,"num_write_kb":4762,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":95,"ondisk_log_size":95,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":14,"num_osds":8,"num_per_pool_osds":8,"num_per_
pool_omap_osds":7,"kb":754974720,"kb_used":217620,"kb_used_data":2780,"kb_used_omap":66,"kb_used_meta":214461,"kb_avail":754757100,"statfs":{"total":773094113280,"available":772871270400,"internally_reserved":0,"allocated":2846720,"data_stored":1724958,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":68258,"internal_metadata":219608414},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"3.364710"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fr
esh":"2026-03-20T12:41:06.515851+0000","last_change":"2026-03-20T12:41:06.515985+0000","last_active":"2026-03-20T12:41:06.515851+0000","last_peered":"2026-03-20T12:41:06.515851+0000","last_clean":"2026-03-20T12:41:06.515851+0000","last_became_active":"2026-03-20T12:41:04.325601+0000","last_became_peered":"2026-03-20T12:41:04.325601+0000","last_unstale":"2026-03-20T12:41:06.515851+0000","last_undegraded":"2026-03-20T12:41:06.515851+0000","last_fullsized":"2026-03-20T12:41:06.515851+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T00:17:25.175560+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000276927,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7],"acting":[6,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.330593+0000","last_change":"2026-03-20T12:41:06.330786+0000","last_active":"2026-03-20T12:41:06.330593+0000","last_peered":"2026-03-20T12:41:06.330593+0000","last_clean":"2026-03-20T12:41:06.330593+0000","last_became_active":"2026-03-20T12:41:04.323784+0000","last_became_peered":"2026-03-20T12:41:04.323784+0000","last_unstale":"2026-03-20T12:41:06.330593+0000","last_undegraded":"2026-03-20T12:41:06.330593+0000","last_fullsized":"2026-03-20T12:41:06.330593+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_
stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T14:53:50.675130+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00032005,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6],"acting":[1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":18,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.836312+0000","last_change":"2026-03-20T12:41:06.836444+0000","last_active":"2026-03-20T12:41:06.836312+0000","last_peered":"2026-03-20T12:41:06.836312+0000","last_clean":"2026-03-20T12:41:06.836312+0000","last_became_active":"2026-03-20T12:41:04.326456+0000
","last_became_peered":"2026-03-20T12:41:04.326456+0000","last_unstale":"2026-03-20T12:41:06.836312+0000","last_undegraded":"2026-03-20T12:41:06.836312+0000","last_fullsized":"2026-03-20T12:41:06.836312+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T15:25:14.529105+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00032438699999999998,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_
by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.330610+0000","last_change":"2026-03-20T12:41:06.330700+0000","last_active":"2026-03-20T12:41:06.330610+0000","last_peered":"2026-03-20T12:41:06.330610+0000","last_clean":"2026-03-20T12:41:06.330610+0000","last_became_active":"2026-03-20T12:41:04.322482+0000","last_became_peered":"2026-03-20T12:41:04.322482+0000","last_unstale":"2026-03-20T12:41:06.330610+0000","last_undegraded":"2026-03-20T12:41:06.330610+0000","last_fullsized":"2026-03-20T12:41:06.330610+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T16:30:08.514302+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00021742699999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"21'2","reported_seq":22,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.332383+0000","last_change":"2026-03-20T12:41:06.332383+0000","last_active":"2026-03-20T12:41:06.332383+0000","last_peered":"2026-03-20T12:41:06.332383+0000","last_clean":"2026-03-20T12:41:06.332383+0000","last_became_active":"2026-03-20T12:41:04.325983+0000","last_became_peered":"2026-03-20T12:41:04.325983+0000","last_unstale":"2026-03-20T12:41:06.332383+0000","last_undegraded":"2026-03-20T12:41:06.332383+0000","last_fullsized":"2026-03-20T12:41:06.332383+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T16:48:27.828336+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00028123500000000001,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.464442+0000","last_change":"2026-03-20T12:41:06.464528+0000","last_active":"2026-03-20T12:41:06.464442+0000","last_peered":"2026-03-20T12:41:06.464442+0000","last_clean":"2026-03-20T12:41:06.464442+0000","last_became_active":"2026-0
3-20T12:41:04.350953+0000","last_became_peered":"2026-03-20T12:41:04.350953+0000","last_unstale":"2026-03-20T12:41:06.464442+0000","last_undegraded":"2026-03-20T12:41:06.464442+0000","last_fullsized":"2026-03-20T12:41:06.464442+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T22:52:35.881859+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019890199999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3],"acting":[2,3],"avail_no_missing":[],"object_loca
tion_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":18,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.836353+0000","last_change":"2026-03-20T12:41:06.836522+0000","last_active":"2026-03-20T12:41:06.836353+0000","last_peered":"2026-03-20T12:41:06.836353+0000","last_clean":"2026-03-20T12:41:06.836353+0000","last_became_active":"2026-03-20T12:41:04.325926+0000","last_became_peered":"2026-03-20T12:41:04.325926+0000","last_unstale":"2026-03-20T12:41:06.836353+0000","last_undegraded":"2026-03-20T12:41:06.836353+0000","last_fullsized":"2026-03-20T12:41:06.836353+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T17:18:33.315144+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00029899899999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1],"acting":[7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"19'1","reported_seq":21,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.332251+0000","last_change":"2026-03-20T12:41:06.332383+0000","last_active":"2026-03-20T12:41:06.332251+0000","last_peered":"2026-03-20T12:41:06.332251+0000","last_clean":"2026-03-20T12:41:06.332251+0000","last_became_active":"2026-03-20T12:41:04.324695+0000","last_became_peered":"2026-03-20T12:41:04.324695+0000","last_unstale":"2026-03-20T12:41:06.332251+0000","last_undegraded":"2026-03-20T12:41:06.332251+0000","last_fullsized":"2026-03-20T12:41:06.332251+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T16:40:41.879545+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00028620400000000003,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"16'192","reported_seq":248,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.836267+0000","last_change":"2026-03-20T12:41:01.421961+0000","last_active":"2026-03-20T12:41:06.836267+0000","last_peered":"2026-03-20T12:41:06.836267+0000","last_clean":"2026-03-20T12:41:06.836267+0000","last_became_active":"202
6-03-20T12:41:01.421696+0000","last_became_peered":"2026-03-20T12:41:01.421696+0000","last_unstale":"2026-03-20T12:41:06.836267+0000","last_undegraded":"2026-03-20T12:41:06.836267+0000","last_fullsized":"2026-03-20T12:41:06.836267+0000","mapping_epoch":15,"log_start":"16'100","ondisk_log_start":"16'100","created":15,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:00.282737+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:00.282737+0000","last_clean_scrub_stamp":"2026-03-20T12:41:00.282737+0000","objects_scrubbed":0,"log_size":92,"log_dups_size":100,"ondisk_log_size":92,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T16:31:39.669657+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":233,"num_write_kb":4760,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"objec
t_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":233,"num_write_kb":4760,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,
"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1187840,"data_stored":1180736,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":92,"ondisk_log_size":92,"up":2,"acting":2,"num_store_stats":2}],"osd_stats":[{"osd":7,"up_from":13,"seq":55834574851,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27624,"kb_used_data":768,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344216,"statfs":{"total":96636764160,"available":96608477184,"internally_reserved":0,"allocated":786432,"data_stored":651911,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8122,"internal_metadata":27451462},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":6,"up_from":13,"seq":55834574851,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":200,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":204800,"data_stored":67275,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8126,"internal_metadata":27451458},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_late
ncy_ns":0},"alerts":[]},{"osd":5,"up_from":13,"seq":55834574851,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":216,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":221184,"data_stored":73110,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":13,"seq":55834574852,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":94371840,"kb_used":27064,"kb_used_data":200,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":204800,"data_stored":67275,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":13,"seq":55834574852,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27052,"kb_used_data":212,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344788,"statfs":{"total":96636764160,"available":96609062912,"internally_reserved":0,"allocated":217088,"data_stored":73091,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8125,"internal_metadata":27451459},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue
_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":13,"seq":55834574852,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":200,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":204800,"data_stored":67275,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574852,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":216,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":221184,"data_stored":73110,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":13,"seq":55834574852,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27624,"kb_used_data":768,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344216,"statfs":{"total":96636764160,"available":96608477184,"internally_reserved":0,"allocated":78643
2,"data_stored":651911,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8127,"internal_metadata":27451457},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"pool
id":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-20T12:41:09.590 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-20T12:41:09.777 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:41:09.777 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-20T12:41:09.788 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":20,"stamp":"2026-03-20T12:41:09.684559+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":590387,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":235,"num_write_kb":4762,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compresse
d_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":95,"ondisk_log_size":95,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":18,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":7,"kb":754974720,"kb_used":217624,"kb_used_data":2864,"kb_used_omap":69,"kb_used_meta":214458,"kb_avail":754757096,"statfs":{"total":773094113280,"available":772871266304,"internally_reserved":0,"allocated":2932736,"data_stored":1765502,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":71508,"internal_metadata":219605164},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_
allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"5.364985"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.515851+0000","last_change":"2026-03-20T12:41:06.515985+0000","last_active":"2026-03-20T12:41:06.515851+0000","last_peered":"2026-03-20T12:41:06.515851+0000","last_clean":"2026-03-20T12:41:06.515851+0000","last_became_active":"2026-03-20T12:41:04.325601+0000","last_became_peered":"2026-03-20T12:41:04.325601+0000","last_unstale":"2026-03-20T12:41:06.515851+0000","last_undegraded":"2026-03-20T12:41:06.515851+0000","last_fullsized":"2026-03-20T12:41:06.515851+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T00:17:25.175560+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000276927,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7],"acting":[6,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.330593+0000","last_change":"2026-03-20T12:41:06.330786+0000","last_active":"2026-03-20T12:41:06.330593+0000","last_peered":"2026-03-20T12:41:06.330593+0000","last_clean":"2026-03-20T12:41:06.330593+0000","last_became_active":"2026-03-20T12:41:04.323784+0000","last_became_peered":"2026-03-20T12:41:04.323784+0000","last_unstale":"2026-03-20T12:41:06.330593+0000","last_undegraded":"2026-03-20T12:41:06.330593+0000","last_fullsized":"2026-03-20T12:41:06.330593+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_
stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T14:53:50.675130+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00032005,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6],"acting":[1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":18,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.836312+0000","last_change":"2026-03-20T12:41:06.836444+0000","last_active":"2026-03-20T12:41:06.836312+0000","last_peered":"2026-03-20T12:41:06.836312+0000","last_clean":"2026-03-20T12:41:06.836312+0000","last_became_active":"2026-03-20T12:41:04.326456+0000
","last_became_peered":"2026-03-20T12:41:04.326456+0000","last_unstale":"2026-03-20T12:41:06.836312+0000","last_undegraded":"2026-03-20T12:41:06.836312+0000","last_fullsized":"2026-03-20T12:41:06.836312+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T15:25:14.529105+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00032438699999999998,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_
by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.330610+0000","last_change":"2026-03-20T12:41:06.330700+0000","last_active":"2026-03-20T12:41:06.330610+0000","last_peered":"2026-03-20T12:41:06.330610+0000","last_clean":"2026-03-20T12:41:06.330610+0000","last_became_active":"2026-03-20T12:41:04.322482+0000","last_became_peered":"2026-03-20T12:41:04.322482+0000","last_unstale":"2026-03-20T12:41:06.330610+0000","last_undegraded":"2026-03-20T12:41:06.330610+0000","last_fullsized":"2026-03-20T12:41:06.330610+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T16:30:08.514302+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00021742699999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"21'2","reported_seq":22,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.332383+0000","last_change":"2026-03-20T12:41:06.332383+0000","last_active":"2026-03-20T12:41:06.332383+0000","last_peered":"2026-03-20T12:41:06.332383+0000","last_clean":"2026-03-20T12:41:06.332383+0000","last_became_active":"2026-03-20T12:41:04.325983+0000","last_became_peered":"2026-03-20T12:41:04.325983+0000","last_unstale":"2026-03-20T12:41:06.332383+0000","last_undegraded":"2026-03-20T12:41:06.332383+0000","last_fullsized":"2026-03-20T12:41:06.332383+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T16:48:27.828336+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00028123500000000001,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.464442+0000","last_change":"2026-03-20T12:41:06.464528+0000","last_active":"2026-03-20T12:41:06.464442+0000","last_peered":"2026-03-20T12:41:06.464442+0000","last_clean":"2026-03-20T12:41:06.464442+0000","last_became_active":"2026-0
3-20T12:41:04.350953+0000","last_became_peered":"2026-03-20T12:41:04.350953+0000","last_unstale":"2026-03-20T12:41:06.464442+0000","last_undegraded":"2026-03-20T12:41:06.464442+0000","last_fullsized":"2026-03-20T12:41:06.464442+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T22:52:35.881859+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019890199999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3],"acting":[2,3],"avail_no_missing":[],"object_loca
tion_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":18,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.836353+0000","last_change":"2026-03-20T12:41:06.836522+0000","last_active":"2026-03-20T12:41:06.836353+0000","last_peered":"2026-03-20T12:41:06.836353+0000","last_clean":"2026-03-20T12:41:06.836353+0000","last_became_active":"2026-03-20T12:41:04.325926+0000","last_became_peered":"2026-03-20T12:41:04.325926+0000","last_unstale":"2026-03-20T12:41:06.836353+0000","last_undegraded":"2026-03-20T12:41:06.836353+0000","last_fullsized":"2026-03-20T12:41:06.836353+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T17:18:33.315144+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00029899899999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1],"acting":[7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"19'1","reported_seq":21,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.332251+0000","last_change":"2026-03-20T12:41:06.332383+0000","last_active":"2026-03-20T12:41:06.332251+0000","last_peered":"2026-03-20T12:41:06.332251+0000","last_clean":"2026-03-20T12:41:06.332251+0000","last_became_active":"2026-03-20T12:41:04.324695+0000","last_became_peered":"2026-03-20T12:41:04.324695+0000","last_unstale":"2026-03-20T12:41:06.332251+0000","last_undegraded":"2026-03-20T12:41:06.332251+0000","last_fullsized":"2026-03-20T12:41:06.332251+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-20T12:41:03.305085+0000","last_clean_scrub_stamp":"2026-03-20T12:41:03.305085+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T16:40:41.879545+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00028620400000000003,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"16'192","reported_seq":248,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T12:41:06.836267+0000","last_change":"2026-03-20T12:41:01.421961+0000","last_active":"2026-03-20T12:41:06.836267+0000","last_peered":"2026-03-20T12:41:06.836267+0000","last_clean":"2026-03-20T12:41:06.836267+0000","last_became_active":"202
6-03-20T12:41:01.421696+0000","last_became_peered":"2026-03-20T12:41:01.421696+0000","last_unstale":"2026-03-20T12:41:06.836267+0000","last_undegraded":"2026-03-20T12:41:06.836267+0000","last_fullsized":"2026-03-20T12:41:06.836267+0000","mapping_epoch":15,"log_start":"16'100","ondisk_log_start":"16'100","created":15,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T12:41:00.282737+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T12:41:00.282737+0000","last_clean_scrub_stamp":"2026-03-20T12:41:00.282737+0000","objects_scrubbed":0,"log_size":92,"log_dups_size":100,"ondisk_log_size":92,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T16:31:39.669657+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":233,"num_write_kb":4760,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"objec
t_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":233,"num_write_kb":4760,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,
"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1187840,"data_stored":1180736,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":92,"ondisk_log_size":92,"up":2,"acting":2,"num_store_stats":2}],"osd_stats":[{"osd":7,"up_from":13,"seq":55834574852,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27632,"kb_used_data":792,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344208,"statfs":{"total":96636764160,"available":96608468992,"internally_reserved":0,"allocated":811008,"data_stored":663459,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8772,"internal_metadata":27450812},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":13,"seq":55834574852,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27060,"kb_used_data":212,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344780,"statfs":{"total":96636764160,"available":96609054720,"internally_reserved":0,"allocated":217088,"data_stored":73091,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8776,"internal_metadata":27450808},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"
alerts":[]},{"osd":5,"up_from":13,"seq":55834574852,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":216,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":221184,"data_stored":73110,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":13,"seq":55834574853,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":94371840,"kb_used":27060,"kb_used_data":212,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344780,"statfs":{"total":96636764160,"available":96609054720,"internally_reserved":0,"allocated":217088,"data_stored":73091,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":13,"seq":55834574853,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27052,"kb_used_data":212,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344788,"statfs":{"total":96636764160,"available":96609062912,"internally_reserved":0,"allocated":217088,"data_stored":73091,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8125,"internal_metadata":27451459},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num
_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":13,"seq":55834574853,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27060,"kb_used_data":212,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344780,"statfs":{"total":96636764160,"available":96609054720,"internally_reserved":0,"allocated":217088,"data_stored":73091,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574853,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":216,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":221184,"data_stored":73110,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":13,"seq":55834574853,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27632,"kb_used_data":792,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344208,"statfs":{"total":96636764160,"available":96608468992,"internally_reserved":0,"allocated":811008,"data_stor
ed":663459,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":
6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-20T12:41:09.789 INFO:tasks.ceph.ceph_manager.ceph:clean! 2026-03-20T12:41:09.789 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-20T12:41:09.789 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy 2026-03-20T12:41:09.789 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph health --format=json 2026-03-20T12:41:10.015 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T12:41:10.015 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-20T12:41:10.027 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy done 2026-03-20T12:41:10.027 INFO:teuthology.run_tasks:Running task openssl_keys... 2026-03-20T12:41:10.030 INFO:teuthology.run_tasks:Running task rgw... 
2026-03-20T12:41:10.034 DEBUG:tasks.rgw:config is {'client.0': None, 'client.1': None, 'client.2': None}
2026-03-20T12:41:10.034 DEBUG:tasks.rgw:client list is dict_keys(['client.0', 'client.1', 'client.2'])
2026-03-20T12:41:10.034 INFO:tasks.rgw:Creating data pools
2026-03-20T12:41:10.034 DEBUG:tasks.rgw:Obtaining remote for client client.0
2026-03-20T12:41:10.034 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph
2026-03-20T12:41:10.346 INFO:teuthology.orchestra.run.vm00.stderr:pool 'default.rgw.buckets.data' created
2026-03-20T12:41:10.370 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph
2026-03-20T12:41:11.370 INFO:teuthology.orchestra.run.vm00.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data'
2026-03-20T12:41:11.392 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph
2026-03-20T12:41:12.379 INFO:teuthology.orchestra.run.vm00.stderr:pool 'default.rgw.buckets.index' created
2026-03-20T12:41:12.404 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph
2026-03-20T12:41:13.042 INFO:teuthology.orchestra.run.vm00.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index'
2026-03-20T12:41:13.059 DEBUG:tasks.rgw:Obtaining remote for client client.1
2026-03-20T12:41:13.059 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph
2026-03-20T12:41:13.262 INFO:teuthology.orchestra.run.vm06.stderr:pool 'default.rgw.buckets.data' already exists
2026-03-20T12:41:13.274 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph
2026-03-20T12:41:14.035 INFO:teuthology.orchestra.run.vm06.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data'
2026-03-20T12:41:14.047 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph
2026-03-20T12:41:14.243 INFO:teuthology.orchestra.run.vm06.stderr:pool 'default.rgw.buckets.index' already exists
2026-03-20T12:41:14.254 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph
2026-03-20T12:41:15.392 INFO:teuthology.orchestra.run.vm06.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index'
2026-03-20T12:41:15.406 DEBUG:tasks.rgw:Obtaining remote for client client.2
2026-03-20T12:41:15.406 DEBUG:teuthology.orchestra.run.vm09:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph
2026-03-20T12:41:15.607 INFO:teuthology.orchestra.run.vm09.stderr:pool 'default.rgw.buckets.data' already exists
2026-03-20T12:41:15.620 DEBUG:teuthology.orchestra.run.vm09:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph
2026-03-20T12:41:16.399 INFO:teuthology.orchestra.run.vm09.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data'
2026-03-20T12:41:16.411 DEBUG:teuthology.orchestra.run.vm09:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph
2026-03-20T12:41:16.605 INFO:teuthology.orchestra.run.vm09.stderr:pool 'default.rgw.buckets.index' already exists
2026-03-20T12:41:16.617 DEBUG:teuthology.orchestra.run.vm09:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph
2026-03-20T12:41:17.408 INFO:teuthology.orchestra.run.vm09.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index'
2026-03-20T12:41:17.420 DEBUG:tasks.rgw:Pools created
2026-03-20T12:41:17.420 INFO:tasks.util.rgw:rgwadmin: client.0 : ['user', 'list']
2026-03-20T12:41:17.420 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'user', 'list']
2026-03-20T12:41:17.420 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph user list
2026-03-20T12:41:17.458 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T12:41:17.458 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T12:41:19.469 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.469+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.470+0000 7f4d6b3d0900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:41:19.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.470+0000 7f4d6b3d0900 20 realm
2026-03-20T12:41:19.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.470+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.470+0000 7f4d6b3d0900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:41:19.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.470+0000 7f4d6b3d0900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory
2026-03-20T12:41:19.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.470+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.471+0000 7f4d6b3d0900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:41:19.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.471+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.471+0000 7f4d6b3d0900 20 rados_obj.operate() r=0 bl.length=46
2026-03-20T12:41:19.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.471+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.472+0000 7f4d6b3d0900 20 rados_obj.operate() r=0 bl.length=1060
2026-03-20T12:41:19.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.472+0000 7f4d6b3d0900 20 searching for the correct realm
2026-03-20T12:41:19.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.479+0000 7f4d6b3d0900 20 RGWRados::pool_iterate: got zone_info.39159d26-247c-45da-824e-10bd55c6de4d
2026-03-20T12:41:19.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.479+0000 7f4d6b3d0900 20 RGWRados::pool_iterate: got default.zonegroup.
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 RGWRados::pool_iterate: got default.zone.
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 RGWRados::pool_iterate: got zone_names.default
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 RGWRados::pool_iterate: got zonegroups_names.default
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 RGWRados::pool_iterate: got zonegroup_info.9626b2cd-be7f-4e66-a24c-00fdcd8682d7
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 rados_obj.operate() r=0 bl.length=46
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.480+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 20 rados_obj.operate() r=0 bl.length=436
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 20 zone default found
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 4 Realm: ()
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 4 ZoneGroup: default (9626b2cd-be7f-4e66-a24c-00fdcd8682d7)
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 4 Zone: default (39159d26-247c-45da-824e-10bd55c6de4d)
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 10 cannot find current period zonegroup using local zonegroup configuration
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 20 zonegroup default
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:41:19.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:19.481+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:21.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:21.454+0000 7f4d6b3d0900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:41:21.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:21.454+0000 7f4d6b3d0900 20 rados->read ofs=0 len=0
2026-03-20T12:41:21.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:21.455+0000 7f4d6b3d0900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:41:21.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:21.455+0000
7f4d6b3d0900 20 started sync module instance, tier type =
2026-03-20T12:41:21.455 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:21.455+0000 7f4d6b3d0900 20 started zone id=39159d26-247c-45da-824e-10bd55c6de4d (name=default) with tier type =
2026-03-20T12:41:23.459 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.459+0000 7f4d6b3d0900 20 add_watcher() i=2
2026-03-20T12:41:23.460 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.460+0000 7f4d6b3d0900 20 add_watcher() i=1
2026-03-20T12:41:23.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.461+0000 7f4d6b3d0900 20 add_watcher() i=0
2026-03-20T12:41:23.461 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.461+0000 7f4d6b3d0900 20 add_watcher() i=4
2026-03-20T12:41:23.464 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.464+0000 7f4d6b3d0900 20 add_watcher() i=3
2026-03-20T12:41:23.466 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.466+0000 7f4d6b3d0900 20 add_watcher() i=7
2026-03-20T12:41:23.468 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.468+0000 7f4d6b3d0900 20 add_watcher() i=5
2026-03-20T12:41:23.468 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.468+0000 7f4d6b3d0900 20 add_watcher() i=6
2026-03-20T12:41:23.468 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.468+0000 7f4d6b3d0900 2 all 8 watchers are set, enabling cache
2026-03-20T12:41:23.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.470+0000 7f4d577fe640 5 boost::asio::awaitable, obj_version> > logback_generations::read(const DoutPrefixProvider*):446: oid=data_loggenerations_metadata not found
2026-03-20T12:41:23.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.470+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.0
2026-03-20T12:41:23.470 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.470+0000 7f4d577fe640 20 do_open: entering
2026-03-20T12:41:23.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.471+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.0 does not exist
2026-03-20T12:41:23.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.471+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.1
2026-03-20T12:41:23.471 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.471+0000 7f4d56ffd640 20 do_open: entering
2026-03-20T12:41:23.472 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.472+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.1 does not exist
2026-03-20T12:41:23.472 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.472+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.2
2026-03-20T12:41:23.472 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.472+0000 7f4d567fc640 20 do_open: entering
2026-03-20T12:41:23.472 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.472+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.2 does not exist
2026-03-20T12:41:23.472 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.472+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.3
2026-03-20T12:41:23.472 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.472+0000 7f4d67182640 20 do_open: entering
2026-03-20T12:41:23.473 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.473+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.3 does not exist
2026-03-20T12:41:23.473 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.473+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.4
2026-03-20T12:41:23.473 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.473+0000 7f4d6517e640 20 do_open: entering
2026-03-20T12:41:23.473 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.473+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.4 does not exist
2026-03-20T12:41:23.473 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.473+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.5
2026-03-20T12:41:23.473 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.473+0000 7f4d6497d640 20 do_open: entering
2026-03-20T12:41:23.474 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.474+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.5 does not exist
2026-03-20T12:41:23.474 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.474+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.6
2026-03-20T12:41:23.474 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.474+0000 7f4d57fff640 20 do_open: entering
2026-03-20T12:41:23.474 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.6 does not exist
2026-03-20T12:41:23.474 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.7
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d6840b640 20 do_open: entering
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.7 does not exist
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.8
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d577fe640 20 do_open: entering
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.8 does not exist
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.9
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d56ffd640 20 do_open: entering
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.9 does not exist
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.10
2026-03-20T12:41:23.475 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.475+0000 7f4d567fc640 20 do_open: entering
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.10 does not exist
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.11
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d67182640 20 do_open: entering
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.11 does not exist
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.12
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d6517e640 20 do_open: entering
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.12 does not exist
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.13
2026-03-20T12:41:23.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.476+0000 7f4d6497d640 20 do_open: entering
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.13 does not exist
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.14
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d57fff640 20 do_open: entering
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.14 does not exist
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.15
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d6840b640 20 do_open: entering
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.15 does not exist
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.16
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d577fe640 20 do_open: entering
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.16 does not exist
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.17
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.477+0000 7f4d56ffd640 20 do_open: entering
2026-03-20T12:41:23.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.17 does not exist
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.18
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d567fc640 20 do_open: entering
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.18 does not exist
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.19
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d67182640 20 do_open: entering
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.19 does not exist
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.20
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d6517e640 20 do_open: entering
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.20 does not exist
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.21
2026-03-20T12:41:23.478 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.478+0000 7f4d6497d640 20 do_open: entering
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.21 does not exist
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.22
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d57fff640 20 do_open: entering
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.22 does not exist
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.23
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d6840b640 20 do_open: entering
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.23 does not exist
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.24
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d577fe640 20 do_open: entering
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.24 does not exist
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.25
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.479+0000 7f4d56ffd640 20 do_open: entering
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.25 does not exist
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.26
2026-03-20T12:41:23.479 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d567fc640 20 do_open: entering
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.26 does not exist
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.27
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d67182640 20 do_open: entering
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.27 does not exist
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.28
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d6517e640 20 do_open: entering
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.28 does not exist
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.29
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.480+0000 7f4d6497d640 20 do_open: entering
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.29 does not exist
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.30
2026-03-20T12:41:23.480 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d57fff640 20 do_open: entering
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.30 does not exist
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.31
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d6840b640 20 do_open: entering
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.31 does not exist
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.32
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d577fe640 20 do_open: entering
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.32 does not exist
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.33
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d56ffd640 20 do_open: entering
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.33 does not exist
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.34
2026-03-20T12:41:23.481 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.481+0000 7f4d567fc640 20 do_open: entering
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.34 does not exist
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.35
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d67182640 20 do_open: entering
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.35 does not exist
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.36
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d6517e640 20 do_open: entering
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.36 does not exist
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.37
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d6497d640 20 do_open: entering
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.37 does not exist
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.38
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.482+0000 7f4d57fff640 20 do_open: entering
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.38 does not exist
2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d6840b640 20 boost::asio::awaitable
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.39 2026-03-20T12:41:23.482 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d6840b640 20 do_open: entering 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.39 does not exist 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.40 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d577fe640 20 do_open: entering 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.40 does not exist 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.41 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d56ffd640 20 do_open: entering 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.41 does not exist 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.42 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d567fc640 20 do_open: entering 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.42 does not exist 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.43 2026-03-20T12:41:23.483 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.483+0000 7f4d67182640 20 do_open: entering 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.43 does not exist 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.44 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6517e640 20 do_open: entering 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6497d640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.44 does not exist 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.45 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6497d640 20 do_open: entering 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.45 does not exist 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.46 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d57fff640 20 do_open: entering 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.46 does not exist 2026-03-20T12:41:23.484 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.47 2026-03-20T12:41:23.484 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.484+0000 7f4d6840b640 20 do_open: entering 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.47 does not exist 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.48 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d577fe640 20 do_open: entering 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.48 does not exist 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.49 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d56ffd640 20 do_open: entering 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.49 does not exist 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d567fc640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.50 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d567fc640 20 do_open: entering 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.50 does not exist 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.51 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.485+0000 7f4d67182640 20 do_open: entering 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.51 does not exist 2026-03-20T12:41:23.485 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.52 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6517e640 20 do_open: entering 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.52 does not exist 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.53 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6497d640 20 do_open: entering 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.53 does not exist 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.54 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d57fff640 20 do_open: entering 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.54 does not exist 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.55 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d6840b640 20 do_open: entering 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d577fe640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.55 does not exist 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.486+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.56 2026-03-20T12:41:23.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d577fe640 20 do_open: entering 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.56 does not exist 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.57 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d56ffd640 20 do_open: entering 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.57 does not exist 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.58 2026-03-20T12:41:23.487 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d567fc640 20 do_open: entering 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.58 does not exist 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.59 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d67182640 20 do_open: entering 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.59 does not exist 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.60 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.487+0000 7f4d6517e640 20 do_open: entering 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.60 does not exist 2026-03-20T12:41:23.487 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d6497d640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.61 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d6497d640 20 do_open: entering 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.61 does not exist 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.62 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d57fff640 20 do_open: entering 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.62 does not exist 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.63 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d6840b640 20 do_open: entering 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.63 does not exist 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.64 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d577fe640 20 do_open: entering 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.64 does not exist 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.65 2026-03-20T12:41:23.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.488+0000 7f4d56ffd640 20 do_open: entering 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.65 does not exist 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.66 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d567fc640 20 do_open: entering 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d67182640 20 
boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.66 does not exist 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.67 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d67182640 20 do_open: entering 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.67 does not exist 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.68 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d6517e640 20 do_open: entering 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.68 does not exist 2026-03-20T12:41:23.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d6497d640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.69 2026-03-20T12:41:23.489 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.489+0000 7f4d6497d640 20 do_open: entering 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.69 does not exist 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d57fff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.70 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d57fff640 20 do_open: entering 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.70 does not exist 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d6840b640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.71 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d6840b640 20 do_open: entering 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.71 does not exist 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d577fe640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.72 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d577fe640 20 do_open: entering 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.72 does not exist 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d56ffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.73 2026-03-20T12:41:23.490 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.490+0000 7f4d56ffd640 20 do_open: entering 2026-03-20T12:41:23.491 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.491+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.73 does not exist 2026-03-20T12:41:23.491 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.491+0000 7f4d567fc640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.74 2026-03-20T12:41:23.491 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.491+0000 7f4d567fc640 20 do_open: entering 2026-03-20T12:41:23.491 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.491+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: 
obj=data_log.74 does not exist
2026-03-20T12:41:23.491 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.491+0000 7f4d67182640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.75
2026-03-20T12:41:23.491 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.491+0000 7f4d67182640 20 do_open: entering
2026-03-20T12:41:23.491 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.491+0000 7f4d6517e640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.75 does not exist
[... identical probe_shard/do_open cycle repeated for data_log.76 through data_log.126, each shard reported as "does not exist", 2026-03-20T12:41:23.491 through 12:41:23.504 ...]
2026-03-20T12:41:23.504 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.504+0000 7f4d6840b640 20 boost::asio::awaitable
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.127 2026-03-20T12:41:23.504 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.504+0000 7f4d6840b640 20 do_open: entering 2026-03-20T12:41:23.504 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.504+0000 7f4d577fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.127 does not exist 2026-03-20T12:41:23.504 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.504+0000 7f4d577fe640 20 do_create: entering 2026-03-20T12:41:23.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.506+0000 7f4d56ffd640 20 do_open: entering 2026-03-20T12:41:23.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.509+0000 7f4d6b3d0900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:23.509 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:23.509+0000 7f4d6b3d0900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:26.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.476+0000 7f4d6b3d0900 10 rgw_init_ioctx warning: failed to set recovery_priority on default.rgw.meta 2026-03-20T12:41:26.476 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.476+0000 7f4d6b3d0900 5 note: GC not initialized 2026-03-20T12:41:26.477 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.476+0000 7f4ccbfff640 20 reqs_thread_entry: start 2026-03-20T12:41:26.536 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.536+0000 7f4d6b3d0900 20 init_complete bucket index max shards: 11 2026-03-20T12:41:26.536 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.536+0000 7f4d6b3d0900 20 Filter name: none 2026-03-20T12:41:26.536 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.536+0000 7f4cc9ffb640 20 reqs_thread_entry: start 2026-03-20T12:41:26.550 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.550+0000 7f4d6b3d0900 20 remove_watcher() i=0 2026-03-20T12:41:26.550 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.550+0000 7f4d6b3d0900 2 removed watcher, disabling cache 2026-03-20T12:41:26.550 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.550+0000 7f4d6b3d0900 20 remove_watcher() i=7 2026-03-20T12:41:26.550 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.550+0000 7f4d6b3d0900 20 remove_watcher() i=3 2026-03-20T12:41:26.551 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.551+0000 7f4d6b3d0900 20 remove_watcher() i=6 2026-03-20T12:41:26.551 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.551+0000 7f4d6b3d0900 20 remove_watcher() i=2 2026-03-20T12:41:26.551 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.551+0000 7f4d6b3d0900 20 remove_watcher() i=5 2026-03-20T12:41:26.551 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.551+0000 7f4d6b3d0900 20 remove_watcher() i=4 2026-03-20T12:41:26.551 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.551+0000 7f4d6b3d0900 20 remove_watcher() i=1 2026-03-20T12:41:26.557 INFO:teuthology.orchestra.run.vm00.stdout:[] 2026-03-20T12:41:26.557 DEBUG:tasks.util.rgw: json result: [] 2026-03-20T12:41:26.557 INFO:tasks.rgw:Configuring storage class = FROZEN 2026-03-20T12:41:26.557 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T12:41:26.557 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', 
'--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T12:41:26.557 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN 2026-03-20T12:41:26.634 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:26.635 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:26.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.647+0000 7faaa2552900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:26.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.647+0000 7faaa2552900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:26.647 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.647+0000 7faa4e7e4640 20 reqs_thread_entry: start 2026-03-20T12:41:26.654 INFO:teuthology.orchestra.run.vm00.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","STANDARD"]}}] 2026-03-20T12:41:26.654 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'STANDARD']}}] 2026-03-20T12:41:26.654 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T12:41:26.654 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', 
'--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T12:41:26.655 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen 2026-03-20T12:41:26.734 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:26.734 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:26.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.747+0000 7f16b95cb900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:26.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.747+0000 7f16b95cb900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:26.747 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.747+0000 7f16627e4640 20 reqs_thread_entry: start 2026-03-20T12:41:26.758 
INFO:teuthology.orchestra.run.vm00.stdout:{"id":"39159d26-247c-45da-824e-10bd55c6de4d","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T12:41:26.758 DEBUG:tasks.util.rgw: json result: {'id': '39159d26-247c-45da-824e-10bd55c6de4d', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 
'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T12:41:26.759 INFO:tasks.rgw:Configuring storage class = LUKEWARM 2026-03-20T12:41:26.759 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T12:41:26.759 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T12:41:26.759 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM 2026-03-20T12:41:26.837 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:26.838 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:26.851 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.851+0000 7f1108214900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:26.852 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.851+0000 7f1108214900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:26.852 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.851+0000 7f10b1fe3640 20 reqs_thread_entry: start 2026-03-20T12:41:26.861 INFO:teuthology.orchestra.run.vm00.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T12:41:26.861 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T12:41:26.861 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T12:41:26.861 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T12:41:26.861 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm 2026-03-20T12:41:26.950 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:26.950 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:26.966 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.965+0000 7f30a731f900 20 
rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:26.966 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.965+0000 7f30a731f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:26.966 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:41:26.966+0000 7f30517e2640 20 reqs_thread_entry: start 2026-03-20T12:41:26.986 INFO:teuthology.orchestra.run.vm00.stdout:{"id":"39159d26-247c-45da-824e-10bd55c6de4d","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T12:41:26.986 DEBUG:tasks.util.rgw: json result: {'id': '39159d26-247c-45da-824e-10bd55c6de4d', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 
'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T12:41:26.986 INFO:tasks.util.rgw:rgwadmin: client.1 : ['user', 'list'] 2026-03-20T12:41:26.986 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'user', 'list'] 2026-03-20T12:41:26.986 DEBUG:teuthology.orchestra.run.vm06:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph user list 2026-03-20T12:41:27.024 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:27.024 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:27.042 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.042+0000 7fc269c10900 20 
rados->read ofs=0 len=0 2026-03-20T12:41:27.043 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.044+0000 7fc269c10900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.044 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.044+0000 7fc269c10900 20 realm 2026-03-20T12:41:27.044 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.044+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.044 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.044+0000 7fc269c10900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.044 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.044+0000 7fc269c10900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory 2026-03-20T12:41:27.044 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.044+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.044 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.044+0000 7fc269c10900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.044 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.044+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.044 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.045+0000 7fc269c10900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T12:41:27.045 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.045+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.045 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.045+0000 7fc269c10900 20 rados_obj.operate() r=0 bl.length=1190 2026-03-20T12:41:27.045 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.045+0000 7fc269c10900 20 searching for the correct realm 2026-03-20T12:41:27.060 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.060+0000 7fc269c10900 20 RGWRados::pool_iterate: got zone_info.39159d26-247c-45da-824e-10bd55c6de4d 2026-03-20T12:41:27.060 
INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.060+0000 7fc269c10900 20 RGWRados::pool_iterate: got default.zonegroup. 2026-03-20T12:41:27.060 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.060+0000 7fc269c10900 20 RGWRados::pool_iterate: got default.zone. 2026-03-20T12:41:27.060 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.060+0000 7fc269c10900 20 RGWRados::pool_iterate: got zone_names.default 2026-03-20T12:41:27.060 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.060+0000 7fc269c10900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-03-20T12:41:27.060 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.060+0000 7fc269c10900 20 RGWRados::pool_iterate: got zonegroup_info.9626b2cd-be7f-4e66-a24c-00fdcd8682d7 2026-03-20T12:41:27.060 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.060+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.060 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.060 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 rados_obj.operate() r=0 bl.length=470 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 zone default found 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 4 Realm: () 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 
7fc269c10900 4 ZoneGroup: default (9626b2cd-be7f-4e66-a24c-00fdcd8682d7) 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 4 Zone: default (39159d26-247c-45da-824e-10bd55c6de4d) 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 10 cannot find current period zonegroup using local zonegroup configuration 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 zonegroup default 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.061+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.062+0000 7fc269c10900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.062+0000 7fc269c10900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.061 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.062+0000 7fc269c10900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.062 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.062+0000 7fc269c10900 20 started sync module instance, tier type = 2026-03-20T12:41:27.062 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.062+0000 7fc269c10900 20 started zone id=39159d26-247c-45da-824e-10bd55c6de4d (name=default) with tier type = 2026-03-20T12:41:27.065 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.065+0000 7fc269c10900 20 add_watcher() i=4 2026-03-20T12:41:27.065 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.065+0000 7fc269c10900 20 
add_watcher() i=2 2026-03-20T12:41:27.065 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.066+0000 7fc269c10900 20 add_watcher() i=1 2026-03-20T12:41:27.065 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.066+0000 7fc269c10900 20 add_watcher() i=0 2026-03-20T12:41:27.066 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.066+0000 7fc269c10900 20 add_watcher() i=3 2026-03-20T12:41:27.066 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.067+0000 7fc269c10900 20 add_watcher() i=5 2026-03-20T12:41:27.067 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.067+0000 7fc269c10900 20 add_watcher() i=7 2026-03-20T12:41:27.067 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.067+0000 7fc269c10900 20 add_watcher() i=6 2026-03-20T12:41:27.067 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.067+0000 7fc269c10900 2 all 8 watchers are set, enabling cache 2026-03-20T12:41:27.069 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.070+0000 7fc269c10900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:27.069 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.070+0000 7fc269c10900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:27.069 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.070+0000 7fc269c10900 5 note: GC not initialized 2026-03-20T12:41:27.069 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.070+0000 7fc212fe5640 20 reqs_thread_entry: start 2026-03-20T12:41:27.114 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.114+0000 7fc269c10900 20 init_complete bucket index max shards: 11 2026-03-20T12:41:27.114 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.114+0000 7fc269c10900 20 Filter name: none 2026-03-20T12:41:27.114 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.114+0000 7fc210fe1640 20 reqs_thread_entry: start 
2026-03-20T12:41:27.123 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 20 remove_watcher() i=0 2026-03-20T12:41:27.123 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 2 removed watcher, disabling cache 2026-03-20T12:41:27.123 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 20 remove_watcher() i=3 2026-03-20T12:41:27.124 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 20 remove_watcher() i=2 2026-03-20T12:41:27.124 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 20 remove_watcher() i=6 2026-03-20T12:41:27.124 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 20 remove_watcher() i=5 2026-03-20T12:41:27.124 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 20 remove_watcher() i=1 2026-03-20T12:41:27.124 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 20 remove_watcher() i=7 2026-03-20T12:41:27.124 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.124+0000 7fc269c10900 20 remove_watcher() i=4 2026-03-20T12:41:27.130 INFO:teuthology.orchestra.run.vm06.stdout:[] 2026-03-20T12:41:27.131 DEBUG:tasks.util.rgw: json result: [] 2026-03-20T12:41:27.131 INFO:tasks.rgw:Configuring storage class = FROZEN 2026-03-20T12:41:27.131 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T12:41:27.131 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T12:41:27.131 
DEBUG:teuthology.orchestra.run.vm06:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN 2026-03-20T12:41:27.211 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:27.211 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:27.226 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.227+0000 7fd892d28900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:27.227 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.227+0000 7fd892d28900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:27.227 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.227+0000 7fd83dfe3640 20 reqs_thread_entry: start 2026-03-20T12:41:27.236 INFO:teuthology.orchestra.run.vm06.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T12:41:27.236 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T12:41:27.236 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T12:41:27.236 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 
'default.rgw.buckets.data.frozen'] 2026-03-20T12:41:27.237 DEBUG:teuthology.orchestra.run.vm06:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen 2026-03-20T12:41:27.318 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:27.318 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:27.332 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.332+0000 7f280e725900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:27.333 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.332+0000 7f280e725900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:27.333 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.332+0000 7f27b8fe1640 20 reqs_thread_entry: start 2026-03-20T12:41:27.342 
INFO:teuthology.orchestra.run.vm06.stdout:{"id":"39159d26-247c-45da-824e-10bd55c6de4d","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T12:41:27.343 DEBUG:tasks.util.rgw: json result: {'id': '39159d26-247c-45da-824e-10bd55c6de4d', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 
'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T12:41:27.343 INFO:tasks.rgw:Configuring storage class = LUKEWARM 2026-03-20T12:41:27.343 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T12:41:27.343 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T12:41:27.343 DEBUG:teuthology.orchestra.run.vm06:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM 2026-03-20T12:41:27.420 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:27.420 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:27.439 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.439+0000 7f193be20900 20 
rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:27.440 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.439+0000 7f193be20900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:27.440 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.439+0000 7f18e57e2640 20 reqs_thread_entry: start 2026-03-20T12:41:27.451 INFO:teuthology.orchestra.run.vm06.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T12:41:27.452 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T12:41:27.452 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T12:41:27.452 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T12:41:27.452 DEBUG:teuthology.orchestra.run.vm06:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm 2026-03-20T12:41:27.536 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:27.536 INFO:teuthology.orchestra.run.vm06.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:27.552 
INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.552+0000 7f4a90a10900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:27.552 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.552+0000 7f4a90a10900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:27.552 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-20T12:41:27.552+0000 7f4a3afdd640 20 reqs_thread_entry: start 2026-03-20T12:41:27.562 INFO:teuthology.orchestra.run.vm06.stdout:{"id":"39159d26-247c-45da-824e-10bd55c6de4d","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T12:41:27.562 DEBUG:tasks.util.rgw: json result: {'id': '39159d26-247c-45da-824e-10bd55c6de4d', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 
'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T12:41:27.562 INFO:tasks.util.rgw:rgwadmin: client.2 : ['user', 'list'] 2026-03-20T12:41:27.562 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'user', 'list'] 2026-03-20T12:41:27.563 DEBUG:teuthology.orchestra.run.vm09:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph user list 2026-03-20T12:41:27.601 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:27.601 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setgroup ceph since I am not root 
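The zone document echoed after each `zone placement add` above is plain JSON, so the storage-class-to-data-pool wiring can be checked mechanically. A minimal sketch (field names and pool values are copied from the log output; the JSON here is abbreviated to the placement section, and `storage_class_pools` is a hypothetical helper, not part of teuthology):

```python
import json

# Abbreviated zone configuration as printed by radosgw-admin in the log.
zone_json = '''
{"name": "default",
 "placement_pools": [
   {"key": "default-placement",
    "val": {"index_pool": "default.rgw.buckets.index",
            "storage_classes": {
              "FROZEN":   {"data_pool": "default.rgw.buckets.data.frozen"},
              "LUKEWARM": {"data_pool": "default.rgw.buckets.data.lukewarm"},
              "STANDARD": {"data_pool": "default.rgw.buckets.data"}},
            "data_extra_pool": "default.rgw.buckets.non-ec"}}]}
'''

def storage_class_pools(zone):
    """Map storage-class name -> data pool across all placement targets."""
    return {
        sc: val["data_pool"]
        for target in zone["placement_pools"]
        for sc, val in target["val"]["storage_classes"].items()
    }

pools = storage_class_pools(json.loads(zone_json))
print(pools["FROZEN"])   # default.rgw.buckets.data.frozen
```

This is the same check the task performs implicitly: after adding each class, the returned zone JSON should list the class under `storage_classes` with the expected `data_pool`.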
2026-03-20T12:41:27.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.627+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.629+0000 7faeab9af900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.629+0000 7faeab9af900 20 realm 2026-03-20T12:41:27.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.629+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.629+0000 7faeab9af900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.629+0000 7faeab9af900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory 2026-03-20T12:41:27.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.629+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.629+0000 7faeab9af900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.629+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.630+0000 7faeab9af900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T12:41:27.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.630+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.630+0000 7faeab9af900 20 rados_obj.operate() r=0 bl.length=1190 2026-03-20T12:41:27.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.630+0000 7faeab9af900 20 searching for the correct realm 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.641+0000 7faeab9af900 20 RGWRados::pool_iterate: got 
zone_info.39159d26-247c-45da-824e-10bd55c6de4d 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.641+0000 7faeab9af900 20 RGWRados::pool_iterate: got default.zonegroup. 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.641+0000 7faeab9af900 20 RGWRados::pool_iterate: got default.zone. 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.641+0000 7faeab9af900 20 RGWRados::pool_iterate: got zone_names.default 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.641+0000 7faeab9af900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.641+0000 7faeab9af900 20 RGWRados::pool_iterate: got zonegroup_info.9626b2cd-be7f-4e66-a24c-00fdcd8682d7 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.641+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 20 rados_obj.operate() r=0 bl.length=470 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 20 zone default found 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 4 Realm: () 2026-03-20T12:41:27.643 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 4 ZoneGroup: default (9626b2cd-be7f-4e66-a24c-00fdcd8682d7) 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 4 Zone: default (39159d26-247c-45da-824e-10bd55c6de4d) 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 10 cannot find current period zonegroup using local zonegroup configuration 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 20 zonegroup default 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.642+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.643+0000 7faeab9af900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.643+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.643+0000 7faeab9af900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.643+0000 7faeab9af900 20 rados->read ofs=0 len=0 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.643+0000 7faeab9af900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.643+0000 7faeab9af900 20 started sync module instance, tier type = 2026-03-20T12:41:27.643 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.643+0000 7faeab9af900 20 started zone id=39159d26-247c-45da-824e-10bd55c6de4d (name=default) with tier type = 2026-03-20T12:41:27.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.646+0000 7faeab9af900 20 add_watcher() i=0 2026-03-20T12:41:27.647 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.647+0000 7faeab9af900 20 add_watcher() i=3 2026-03-20T12:41:27.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.647+0000 7faeab9af900 20 add_watcher() i=5 2026-03-20T12:41:27.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.647+0000 7faeab9af900 20 add_watcher() i=7 2026-03-20T12:41:27.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.647+0000 7faeab9af900 20 add_watcher() i=1 2026-03-20T12:41:27.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.647+0000 7faeab9af900 20 add_watcher() i=2 2026-03-20T12:41:27.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.647+0000 7faeab9af900 20 add_watcher() i=6 2026-03-20T12:41:27.648 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.648+0000 7faeab9af900 20 add_watcher() i=4 2026-03-20T12:41:27.648 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.648+0000 7faeab9af900 2 all 8 watchers are set, enabling cache 2026-03-20T12:41:27.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.649+0000 7faeab9af900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:27.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.649+0000 7faeab9af900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:27.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.649+0000 7faeab9af900 5 note: GC not initialized 2026-03-20T12:41:27.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.650+0000 7fae56fe5640 20 reqs_thread_entry: start 2026-03-20T12:41:27.694 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.694+0000 7faeab9af900 20 init_complete bucket index max shards: 11 2026-03-20T12:41:27.694 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.694+0000 7faeab9af900 20 Filter name: none 2026-03-20T12:41:27.694 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.694+0000 7fae54fe1640 20 reqs_thread_entry: start 2026-03-20T12:41:27.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.706+0000 7faeab9af900 20 remove_watcher() i=0 2026-03-20T12:41:27.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.706+0000 7faeab9af900 2 removed watcher, disabling cache 2026-03-20T12:41:27.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.706+0000 7faeab9af900 20 remove_watcher() i=1 2026-03-20T12:41:27.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.706+0000 7faeab9af900 20 remove_watcher() i=2 2026-03-20T12:41:27.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.706+0000 7faeab9af900 20 remove_watcher() i=4 2026-03-20T12:41:27.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.706+0000 7faeab9af900 20 remove_watcher() i=7 2026-03-20T12:41:27.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.707+0000 7faeab9af900 20 remove_watcher() i=3 2026-03-20T12:41:27.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.707+0000 7faeab9af900 20 remove_watcher() i=6 2026-03-20T12:41:27.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.707+0000 7faeab9af900 20 remove_watcher() i=5 2026-03-20T12:41:27.713 INFO:teuthology.orchestra.run.vm09.stdout:[] 2026-03-20T12:41:27.713 DEBUG:tasks.util.rgw: json result: [] 2026-03-20T12:41:27.713 INFO:tasks.rgw:Configuring storage class = FROZEN 2026-03-20T12:41:27.714 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T12:41:27.714 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', 
'--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T12:41:27.714 DEBUG:teuthology.orchestra.run.vm09:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN 2026-03-20T12:41:27.791 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:27.791 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:27.806 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.806+0000 7f4da0f41900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:27.806 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.806+0000 7f4da0f41900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:27.806 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.806+0000 7f4d4afe5640 20 reqs_thread_entry: start 2026-03-20T12:41:27.869 INFO:teuthology.orchestra.run.vm09.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T12:41:27.870 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T12:41:27.870 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T12:41:27.870 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 
'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T12:41:27.870 DEBUG:teuthology.orchestra.run.vm09:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen 2026-03-20T12:41:27.947 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:27.947 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:27.962 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.962+0000 7fa3fa93f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:27.962 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.962+0000 7fa3fa93f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:27.962 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:27.962+0000 7fa3a4fe1640 20 reqs_thread_entry: start 2026-03-20T12:41:28.015 
INFO:teuthology.orchestra.run.vm09.stdout:{"id":"39159d26-247c-45da-824e-10bd55c6de4d","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T12:41:28.015 DEBUG:tasks.util.rgw: json result: {'id': '39159d26-247c-45da-824e-10bd55c6de4d', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 
'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T12:41:28.015 INFO:tasks.rgw:Configuring storage class = LUKEWARM 2026-03-20T12:41:28.015 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T12:41:28.015 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T12:41:28.015 DEBUG:teuthology.orchestra.run.vm09:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM 2026-03-20T12:41:28.094 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:28.094 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:28.109 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:28.108+0000 7f326d20f900 20 
rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:28.109 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:28.108+0000 7f326d20f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:28.109 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:28.109+0000 7f3216fe5640 20 reqs_thread_entry: start 2026-03-20T12:41:28.119 INFO:teuthology.orchestra.run.vm09.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T12:41:28.119 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T12:41:28.119 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T12:41:28.119 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T12:41:28.120 DEBUG:teuthology.orchestra.run.vm09:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm 2026-03-20T12:41:28.198 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setuser ceph since I am not root 2026-03-20T12:41:28.198 INFO:teuthology.orchestra.run.vm09.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T12:41:28.213 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:28.212+0000 7f9bb8e10900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T12:41:28.213 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:28.212+0000 7f9bb8e10900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T12:41:28.213 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-20T12:41:28.213+0000 7f9b62fdd640 20 reqs_thread_entry: start 2026-03-20T12:41:28.224 INFO:teuthology.orchestra.run.vm09.stdout:{"id":"39159d26-247c-45da-824e-10bd55c6de4d","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T12:41:28.224 DEBUG:tasks.util.rgw: json result: {'id': '39159d26-247c-45da-824e-10bd55c6de4d', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 
'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T12:41:28.224 INFO:tasks.rgw:Starting rgw... 
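Each storage class in the log above is configured in two steps: `zonegroup placement add` registers the class name under the placement target, then `zone placement add` binds it to a data pool with `--data-pool`. A sketch of how those command pairs are assembled (the `placement_cmds` helper is hypothetical; the flags mirror the commands actually logged, minus the coverage/keyring wrappers):

```python
def placement_cmds(storage_class, data_pool,
                   zone="default", placement_id="default-placement"):
    """Build the two radosgw-admin invocations seen in the log:
    first register the storage class in the zonegroup, then bind
    it to a data pool in the zone."""
    base = ["radosgw-admin", "--format", "json"]
    zonegroup_add = base + ["zonegroup", "placement", "add",
                            "--rgw-zone", zone,
                            "--placement-id", placement_id,
                            "--storage-class", storage_class]
    zone_add = base + ["zone", "placement", "add",
                       "--rgw-zone", zone,
                       "--placement-id", placement_id,
                       "--storage-class", storage_class,
                       "--data-pool", data_pool]
    return zonegroup_add, zone_add

zg, z = placement_cmds("FROZEN", "default.rgw.buckets.data.frozen")
```

Only the second (zone-level) command carries `--data-pool`; the zonegroup-level command just makes the class name visible, which is why the log shows the zonegroup placement list gaining the class before any pool is attached.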
2026-03-20T12:41:28.224 INFO:tasks.rgw:rgw client.0 config is {}
2026-03-20T12:41:28.224 INFO:tasks.rgw:Using beast as radosgw frontend
2026-03-20T12:41:28.224 DEBUG:teuthology.orchestra.run.vm00:> sudo echo -n http://vm00.local:80 | sudo tee /home/ubuntu/cephtest/url_file
2026-03-20T12:41:28.253 INFO:teuthology.orchestra.run.vm00.stdout:http://vm00.local:80
2026-03-20T12:41:28.253 DEBUG:teuthology.orchestra.run.vm00:> sudo chown ceph /home/ubuntu/cephtest/url_file
2026-03-20T12:41:28.318 INFO:tasks.rgw.client.0:Restarting daemon
2026-03-20T12:41:28.318 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.0.stdout 2>&1
2026-03-20T12:41:28.360 INFO:tasks.rgw.client.0:Started
2026-03-20T12:41:28.360 INFO:tasks.rgw:rgw client.1 config is {}
2026-03-20T12:41:28.360 INFO:tasks.rgw:Using beast as radosgw frontend
2026-03-20T12:41:28.360 DEBUG:teuthology.orchestra.run.vm06:> sudo echo -n http://vm06.local:80 | sudo tee /home/ubuntu/cephtest/url_file
2026-03-20T12:41:28.393 INFO:teuthology.orchestra.run.vm06.stdout:http://vm06.local:80
2026-03-20T12:41:28.393 DEBUG:teuthology.orchestra.run.vm06:> sudo chown ceph /home/ubuntu/cephtest/url_file
2026-03-20T12:41:28.461 INFO:tasks.rgw.client.1:Restarting daemon
2026-03-20T12:41:28.461 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.1 --cluster ceph -k /etc/ceph/ceph.client.1.keyring --log-file /var/log/ceph/rgw.ceph.client.1.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.1.stdout 2>&1
2026-03-20T12:41:28.502 INFO:tasks.rgw.client.1:Started
2026-03-20T12:41:28.502 INFO:tasks.rgw:rgw client.2 config is {}
2026-03-20T12:41:28.502 INFO:tasks.rgw:Using beast as radosgw frontend
2026-03-20T12:41:28.502 DEBUG:teuthology.orchestra.run.vm09:> sudo echo -n http://vm09.local:80 | sudo tee /home/ubuntu/cephtest/url_file
2026-03-20T12:41:28.533 INFO:teuthology.orchestra.run.vm09.stdout:http://vm09.local:80
2026-03-20T12:41:28.533 DEBUG:teuthology.orchestra.run.vm09:> sudo chown ceph /home/ubuntu/cephtest/url_file
2026-03-20T12:41:28.600 INFO:tasks.rgw.client.2:Restarting daemon
2026-03-20T12:41:28.600 DEBUG:teuthology.orchestra.run.vm09:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.2 --cluster ceph -k /etc/ceph/ceph.client.2.keyring --log-file /var/log/ceph/rgw.ceph.client.2.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.2.stdout 2>&1
2026-03-20T12:41:28.642 INFO:tasks.rgw.client.2:Started
2026-03-20T12:41:28.642 INFO:tasks.rgw:Polling client.0 until it starts accepting connections on http://vm00.local:80/
2026-03-20T12:41:28.642 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/
2026-03-20T12:41:28.686 DEBUG:teuthology.orchestra.run:got remote process result: 7
2026-03-20T12:41:28.686 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-20T12:41:28.686 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed
2026-03-20T12:41:28.686 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
2026-03-20T12:41:28.686 INFO:teuthology.orchestra.run.vm00.stderr:curl: (7) Failed to connect to vm00.local port 80: Connection refused
2026-03-20T12:41:29.687 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/
2026-03-20T12:41:29.705 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-20T12:41:29.705 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed
2026-03-20T12:41:29.706 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 182k 0 --:--:-- --:--:-- --:--:-- 182k
2026-03-20T12:41:29.706 INFO:teuthology.orchestra.run.vm00.stdout:anonymous
2026-03-20T12:41:29.707 INFO:tasks.rgw:Polling client.1 until it starts accepting connections on http://vm06.local:80/
2026-03-20T12:41:29.707 DEBUG:teuthology.orchestra.run.vm06:> curl http://vm06.local:80/
2026-03-20T12:41:29.732 INFO:teuthology.orchestra.run.vm06.stderr: % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-20T12:41:29.732 INFO:teuthology.orchestra.run.vm06.stderr: Dload Upload Total Spent Left Speed
2026-03-20T12:41:29.732 INFO:teuthology.orchestra.run.vm06.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 46750 0 --:--:-- --:--:-- --:--:-- 46750
2026-03-20T12:41:29.733 INFO:teuthology.orchestra.run.vm06.stdout:anonymous
2026-03-20T12:41:29.733 INFO:tasks.rgw:Polling client.2 until it starts accepting connections on http://vm09.local:80/
2026-03-20T12:41:29.733 DEBUG:teuthology.orchestra.run.vm09:> curl http://vm09.local:80/
2026-03-20T12:41:29.752 INFO:teuthology.orchestra.run.vm09.stderr: % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-20T12:41:29.752 INFO:teuthology.orchestra.run.vm09.stderr: Dload Upload Total Spent Left Speed
2026-03-20T12:41:29.754 INFO:teuthology.orchestra.run.vm09.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 182k 0 --:--:-- --:--:-- --:--:-- 182k
2026-03-20T12:41:29.754 INFO:teuthology.orchestra.run.vm09.stdout:anonymous
2026-03-20T12:41:29.754 INFO:teuthology.run_tasks:Running task tox...
2026-03-20T12:41:29.757 INFO:tasks.tox:Deploying tox from pip...
2026-03-20T12:41:29.757 DEBUG:teuthology.orchestra.run.vm00:> curl -LsSf https://astral.sh/uv/install.sh | sh
2026-03-20T12:41:30.028 INFO:teuthology.orchestra.run.vm00.stderr:downloading uv 0.10.12 x86_64-unknown-linux-gnu
2026-03-20T12:41:30.528 INFO:teuthology.orchestra.run.vm00.stderr:no checksums to verify
2026-03-20T12:41:30.797 INFO:teuthology.orchestra.run.vm00.stdout:installing to /home/ubuntu/.local/bin
2026-03-20T12:41:30.802 INFO:teuthology.orchestra.run.vm00.stdout: uv
2026-03-20T12:41:30.804 INFO:teuthology.orchestra.run.vm00.stdout: uvx
2026-03-20T12:41:30.808 INFO:teuthology.orchestra.run.vm00.stdout:everything's installed!
2026-03-20T12:41:30.814 DEBUG:teuthology.orchestra.run.vm00:> $HOME/.local/bin/uv python install 3.11
2026-03-20T12:41:30.942 INFO:teuthology.orchestra.run.vm00.stderr:Downloading cpython-3.11.15-linux-x86_64-gnu (download) (29.8MiB)
2026-03-20T12:41:31.800 INFO:teuthology.orchestra.run.vm00.stderr: Downloaded cpython-3.11.15-linux-x86_64-gnu (download)
2026-03-20T12:41:31.802 INFO:teuthology.orchestra.run.vm00.stderr:Installed Python 3.11.15 in 964ms
2026-03-20T12:41:31.802 INFO:teuthology.orchestra.run.vm00.stderr: + cpython-3.11.15-linux-x86_64-gnu (python3.11)
2026-03-20T12:41:31.803 DEBUG:teuthology.orchestra.run.vm00:> $HOME/.local/bin/uv run --python 3.11 -m venv /home/ubuntu/cephtest/tox-venv
2026-03-20T12:41:34.392 DEBUG:teuthology.orchestra.run.vm00:> source /home/ubuntu/cephtest/tox-venv/bin/activate && pip install tox
2026-03-20T12:41:34.682 INFO:teuthology.orchestra.run.vm00.stdout:Collecting tox
2026-03-20T12:41:34.710 INFO:teuthology.orchestra.run.vm00.stdout: Downloading tox-4.50.3-py3-none-any.whl.metadata (3.6 kB)
2026-03-20T12:41:34.739 INFO:teuthology.orchestra.run.vm00.stdout:Collecting cachetools>=7.0.3 (from tox)
2026-03-20T12:41:34.748 INFO:teuthology.orchestra.run.vm00.stdout: Downloading cachetools-7.0.5-py3-none-any.whl.metadata (5.6 kB)
2026-03-20T12:41:34.771 INFO:teuthology.orchestra.run.vm00.stdout:Collecting colorama>=0.4.6 (from tox)
2026-03-20T12:41:34.780 INFO:teuthology.orchestra.run.vm00.stdout: Downloading colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB)
2026-03-20T12:41:34.808 INFO:teuthology.orchestra.run.vm00.stdout:Collecting filelock>=3.25 (from tox)
2026-03-20T12:41:34.819 INFO:teuthology.orchestra.run.vm00.stdout: Downloading filelock-3.25.2-py3-none-any.whl.metadata (2.0 kB)
2026-03-20T12:41:34.845 INFO:teuthology.orchestra.run.vm00.stdout:Collecting packaging>=26 (from tox)
2026-03-20T12:41:34.855 INFO:teuthology.orchestra.run.vm00.stdout: Downloading packaging-26.0-py3-none-any.whl.metadata (3.3 kB)
2026-03-20T12:41:34.879 INFO:teuthology.orchestra.run.vm00.stdout:Collecting platformdirs>=4.9.4 (from tox)
2026-03-20T12:41:34.887 INFO:teuthology.orchestra.run.vm00.stdout: Downloading platformdirs-4.9.4-py3-none-any.whl.metadata (4.7 kB)
2026-03-20T12:41:34.908 INFO:teuthology.orchestra.run.vm00.stdout:Collecting pluggy>=1.6 (from tox)
2026-03-20T12:41:34.917 INFO:teuthology.orchestra.run.vm00.stdout: Downloading pluggy-1.6.0-py3-none-any.whl.metadata (4.8 kB)
2026-03-20T12:41:34.935 INFO:teuthology.orchestra.run.vm00.stdout:Collecting pyproject-api>=1.10 (from tox)
2026-03-20T12:41:34.944 INFO:teuthology.orchestra.run.vm00.stdout: Downloading pyproject_api-1.10.0-py3-none-any.whl.metadata (2.7 kB)
2026-03-20T12:41:34.961 INFO:teuthology.orchestra.run.vm00.stdout:Collecting tomli-w>=1.2 (from tox)
2026-03-20T12:41:34.969 INFO:teuthology.orchestra.run.vm00.stdout: Downloading tomli_w-1.2.0-py3-none-any.whl.metadata (5.7 kB)
2026-03-20T12:41:35.016 INFO:teuthology.orchestra.run.vm00.stdout:Collecting virtualenv>=21.1 (from tox)
2026-03-20T12:41:35.025 INFO:teuthology.orchestra.run.vm00.stdout: Downloading virtualenv-21.2.0-py3-none-any.whl.metadata (3.5 kB)
2026-03-20T12:41:35.062 INFO:teuthology.orchestra.run.vm00.stdout:Collecting distlib<1,>=0.3.7 (from virtualenv>=21.1->tox)
2026-03-20T12:41:35.071 INFO:teuthology.orchestra.run.vm00.stdout: Downloading distlib-0.4.0-py2.py3-none-any.whl.metadata (5.2 kB)
2026-03-20T12:41:35.096 INFO:teuthology.orchestra.run.vm00.stdout:Collecting python-discovery>=1 (from virtualenv>=21.1->tox)
2026-03-20T12:41:35.105 INFO:teuthology.orchestra.run.vm00.stdout: Downloading python_discovery-1.2.0-py3-none-any.whl.metadata (5.4 kB)
2026-03-20T12:41:35.128 INFO:teuthology.orchestra.run.vm00.stdout:Downloading tox-4.50.3-py3-none-any.whl (207 kB)
2026-03-20T12:41:35.148 INFO:teuthology.orchestra.run.vm00.stdout: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 207.7/207.7 kB 11.0 MB/s eta 0:00:00
2026-03-20T12:41:35.157 INFO:teuthology.orchestra.run.vm00.stdout:Downloading cachetools-7.0.5-py3-none-any.whl (13 kB)
2026-03-20T12:41:35.166 INFO:teuthology.orchestra.run.vm00.stdout:Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
2026-03-20T12:41:35.176 INFO:teuthology.orchestra.run.vm00.stdout:Downloading filelock-3.25.2-py3-none-any.whl (26 kB)
2026-03-20T12:41:35.188 INFO:teuthology.orchestra.run.vm00.stdout:Downloading packaging-26.0-py3-none-any.whl (74 kB)
2026-03-20T12:41:35.191 INFO:teuthology.orchestra.run.vm00.stdout: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.4/74.4 kB 31.2 MB/s eta 0:00:00
2026-03-20T12:41:35.200 INFO:teuthology.orchestra.run.vm00.stdout:Downloading platformdirs-4.9.4-py3-none-any.whl (21 kB)
2026-03-20T12:41:35.210 INFO:teuthology.orchestra.run.vm00.stdout:Downloading pluggy-1.6.0-py3-none-any.whl (20 kB)
2026-03-20T12:41:35.219 INFO:teuthology.orchestra.run.vm00.stdout:Downloading pyproject_api-1.10.0-py3-none-any.whl (13 kB)
2026-03-20T12:41:35.229 INFO:teuthology.orchestra.run.vm00.stdout:Downloading tomli_w-1.2.0-py3-none-any.whl (6.7 kB)
2026-03-20T12:41:35.238 INFO:teuthology.orchestra.run.vm00.stdout:Downloading virtualenv-21.2.0-py3-none-any.whl (5.8 MB)
2026-03-20T12:41:35.312 INFO:teuthology.orchestra.run.vm00.stdout: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 80.7 MB/s eta 0:00:00
2026-03-20T12:41:35.321 INFO:teuthology.orchestra.run.vm00.stdout:Downloading distlib-0.4.0-py2.py3-none-any.whl (469 kB)
2026-03-20T12:41:35.328 INFO:teuthology.orchestra.run.vm00.stdout: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 469.0/469.0 kB 81.2 MB/s eta 0:00:00
2026-03-20T12:41:35.337 INFO:teuthology.orchestra.run.vm00.stdout:Downloading python_discovery-1.2.0-py3-none-any.whl (31 kB)
2026-03-20T12:41:35.386 INFO:teuthology.orchestra.run.vm00.stdout:Installing collected packages: distlib, tomli-w, pluggy, platformdirs, packaging, filelock, colorama, cachetools, python-discovery, pyproject-api, virtualenv, tox
2026-03-20T12:41:35.712 INFO:teuthology.orchestra.run.vm00.stdout:Successfully installed cachetools-7.0.5 colorama-0.4.6 distlib-0.4.0 filelock-3.25.2 packaging-26.0 platformdirs-4.9.4 pluggy-1.6.0 pyproject-api-1.10.0 python-discovery-1.2.0 tomli-w-1.2.0 tox-4.50.3 virtualenv-21.2.0
2026-03-20T12:41:35.771 INFO:teuthology.orchestra.run.vm00.stderr:
2026-03-20T12:41:35.771 INFO:teuthology.orchestra.run.vm00.stderr:[notice] A new release of pip is available: 24.0 -> 26.0.1
2026-03-20T12:41:35.771 INFO:teuthology.orchestra.run.vm00.stderr:[notice] To update, run: pip install --upgrade pip
2026-03-20T12:41:35.820 INFO:teuthology.run_tasks:Running task tox...
2026-03-20T12:41:35.823 INFO:tasks.tox:Deploying tox from pip...
2026-03-20T12:41:35.823 DEBUG:teuthology.orchestra.run.vm00:> curl -LsSf https://astral.sh/uv/install.sh | sh
2026-03-20T12:41:36.092 INFO:teuthology.orchestra.run.vm00.stderr:downloading uv 0.10.12 x86_64-unknown-linux-gnu
2026-03-20T12:41:36.495 INFO:teuthology.orchestra.run.vm00.stderr:no checksums to verify
2026-03-20T12:41:36.837 INFO:teuthology.orchestra.run.vm00.stdout:installing to /home/ubuntu/.local/bin
2026-03-20T12:41:36.842 INFO:teuthology.orchestra.run.vm00.stdout: uv
2026-03-20T12:41:36.844 INFO:teuthology.orchestra.run.vm00.stdout: uvx
2026-03-20T12:41:36.853 INFO:teuthology.orchestra.run.vm00.stdout:everything's installed!
2026-03-20T12:41:36.859 DEBUG:teuthology.orchestra.run.vm00:> $HOME/.local/bin/uv python install 3.11
2026-03-20T12:41:36.918 INFO:teuthology.orchestra.run.vm00.stderr:Python 3.11 is already installed
2026-03-20T12:41:36.920 DEBUG:teuthology.orchestra.run.vm00:> $HOME/.local/bin/uv run --python 3.11 -m venv /home/ubuntu/cephtest/tox-venv
2026-03-20T12:41:37.925 DEBUG:teuthology.orchestra.run.vm00:> source /home/ubuntu/cephtest/tox-venv/bin/activate && pip install tox
2026-03-20T12:41:38.106 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: tox in ./cephtest/tox-venv/lib/python3.11/site-packages (4.50.3)
2026-03-20T12:41:38.110 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: cachetools>=7.0.3 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (7.0.5)
2026-03-20T12:41:38.111 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: colorama>=0.4.6 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (0.4.6)
2026-03-20T12:41:38.111 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: filelock>=3.25 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (3.25.2)
2026-03-20T12:41:38.112 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: packaging>=26 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (26.0)
2026-03-20T12:41:38.112 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: platformdirs>=4.9.4 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (4.9.4)
2026-03-20T12:41:38.113 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: pluggy>=1.6 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (1.6.0)
2026-03-20T12:41:38.113 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: pyproject-api>=1.10 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (1.10.0)
2026-03-20T12:41:38.114 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: tomli-w>=1.2 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (1.2.0)
2026-03-20T12:41:38.114 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: virtualenv>=21.1 in ./cephtest/tox-venv/lib/python3.11/site-packages (from tox) (21.2.0)
2026-03-20T12:41:38.123 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: distlib<1,>=0.3.7 in ./cephtest/tox-venv/lib/python3.11/site-packages (from virtualenv>=21.1->tox) (0.4.0)
2026-03-20T12:41:38.124 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: python-discovery>=1 in ./cephtest/tox-venv/lib/python3.11/site-packages (from virtualenv>=21.1->tox) (1.2.0)
2026-03-20T12:41:38.160 INFO:teuthology.orchestra.run.vm00.stderr:
2026-03-20T12:41:38.160 INFO:teuthology.orchestra.run.vm00.stderr:[notice] A new release of pip is available: 24.0 -> 26.0.1
2026-03-20T12:41:38.160 INFO:teuthology.orchestra.run.vm00.stderr:[notice] To update, run: pip install --upgrade pip
2026-03-20T12:41:38.200 INFO:teuthology.run_tasks:Running task dedup-tests...
2026-03-20T12:41:38.204 DEBUG:tasks.dedup_tests:config is {'client.0': {'rgw_server': 'client.0'}}
2026-03-20T12:41:38.204 INFO:tasks.dedup_tests:Downloading dedup-tests...
2026-03-20T12:41:38.204 INFO:tasks.dedup_tests:Using branch tt-tentacle from https://github.com/kshtsk/ceph.git for dedup tests
2026-03-20T12:41:38.204 DEBUG:teuthology.orchestra.run.vm00:> git clone -b tt-tentacle https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/ceph
2026-03-20T12:41:38.220 INFO:teuthology.orchestra.run.vm00.stderr:Cloning into '/home/ubuntu/cephtest/ceph'...
2026-03-20T12:42:22.435 INFO:teuthology.orchestra.run.vm00.stderr:Updating files: 92% (12532/13597) Updating files: 93% (12646/13597) Updating files: 94% (12782/13597) Updating files: 95% (12918/13597) Updating files: 96% (13054/13597) Updating files: 97% (13190/13597) Updating files: 98% (13326/13597) Updating files: 99% (13462/13597) Updating files: 100% (13597/13597) Updating files: 100% (13597/13597), done.
2026-03-20T12:42:22.460 INFO:tasks.dedup_tests:Creating rgw user...
2026-03-20T12:42:22.460 DEBUG:tasks.dedup_tests:Creating user foo.client.0 on client.0
2026-03-20T12:42:22.460 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid foo.client.0 --display-name 'Mr. foo.client.0' --access-key MESNBKHWOVPBVNINOGEU --secret MGGRRLRWdcLApHZfqklt0MXnMV5JYRgLOU16EH0O9gKC2QPMR+kiAg== --cluster ceph
2026-03-20T12:42:22.541 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T12:42:22.541 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T12:42:22.561 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.560+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.562+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.562+0000 7ff2f5b2f900 20 realm
2026-03-20T12:42:22.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.562+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.562+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.562+0000 7ff2f5b2f900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory
2026-03-20T12:42:22.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.562+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.562+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.562 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.562+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.563 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.563+0000 7ff2f5b2f900 20 rados_obj.operate() r=0 bl.length=46
2026-03-20T12:42:22.563 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.563+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.563 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.563+0000 7ff2f5b2f900 20 rados_obj.operate() r=0 bl.length=1190
2026-03-20T12:42:22.563 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.563+0000 7ff2f5b2f900 20 searching for the correct realm
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 RGWRados::pool_iterate: got zone_info.39159d26-247c-45da-824e-10bd55c6de4d
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 RGWRados::pool_iterate: got default.zonegroup.
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 RGWRados::pool_iterate: got default.zone.
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 RGWRados::pool_iterate: got zone_names.default
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 RGWRados::pool_iterate: got zonegroups_names.default
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 RGWRados::pool_iterate: got zonegroup_info.9626b2cd-be7f-4e66-a24c-00fdcd8682d7
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 rados_obj.operate() r=0 bl.length=46
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 rados_obj.operate() r=0 bl.length=470
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 zone default found
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 4 Realm: ()
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 4 ZoneGroup: default (9626b2cd-be7f-4e66-a24c-00fdcd8682d7)
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 4 Zone: default (39159d26-247c-45da-824e-10bd55c6de4d)
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 10 cannot find current period zonegroup using local zonegroup configuration
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 zonegroup default
2026-03-20T12:42:22.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.573+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.574+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.574+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.574+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.574+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.574+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.574+0000 7ff2f5b2f900 20 started sync module instance, tier type =
2026-03-20T12:42:22.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.574+0000 7ff2f5b2f900 20 started zone id=39159d26-247c-45da-824e-10bd55c6de4d (name=default) with tier type =
2026-03-20T12:42:22.578 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.578+0000 7ff2f5b2f900 20 add_watcher() i=5
2026-03-20T12:42:22.578 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.578+0000 7ff2f5b2f900 20 add_watcher() i=6
2026-03-20T12:42:22.578 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.578+0000 7ff2f5b2f900 20 add_watcher() i=3
2026-03-20T12:42:22.578 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.578+0000 7ff2f5b2f900 20 add_watcher() i=4
2026-03-20T12:42:22.579 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.579+0000 7ff2f5b2f900 20 add_watcher() i=0
2026-03-20T12:42:22.579 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.579+0000 7ff2f5b2f900 20 add_watcher() i=7
2026-03-20T12:42:22.579 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.579+0000 7ff2f5b2f900 20 add_watcher() i=1
2026-03-20T12:42:22.579 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.579+0000 7ff2f5b2f900 20 add_watcher() i=2
2026-03-20T12:42:22.579 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.579+0000 7ff2f5b2f900 2 all 8 watchers are set, enabling cache
2026-03-20T12:42:22.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.581+0000 7ff2f5b2f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-03-20T12:42:22.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.581+0000 7ff2f5b2f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-03-20T12:42:22.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.581+0000 7ff2f5b2f900 5 note: GC not initialized
2026-03-20T12:42:22.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.582+0000 7ff29b7fe640 20 reqs_thread_entry: start
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 20 init_complete bucket index max shards: 11
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 20 Filter name: none
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2997fa640 20 reqs_thread_entry: start
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 10 cache get: name=default.rgw.meta+users.uid+foo.client.0 : miss
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 10 cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x0
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 10 adding default.rgw.meta+users.uid+foo.client.0 to cache LRU end
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 10 cache get: name=default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU : miss
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.624+0000 7ff2f5b2f900 20 rados->read ofs=0 len=0
2026-03-20T12:42:22.624 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.625+0000 7ff2f5b2f900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T12:42:22.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.625+0000 7ff2f5b2f900 10 cache put: name=default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU info.flags=0x0
2026-03-20T12:42:22.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.625+0000 7ff2f5b2f900 10 adding default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU to cache LRU end
2026-03-20T12:42:22.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.625+0000 7ff2f5b2f900 10 cache get: name=default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU : hit (negative entry)
2026-03-20T12:42:22.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.625+0000 7ff2f5b2f900 10 cache get: name=default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU : hit (negative entry)
2026-03-20T12:42:22.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.626+0000 7ff2f5b2f900 10 cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x17
2026-03-20T12:42:22.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.626+0000 7ff2f5b2f900 10 moving default.rgw.meta+users.uid+foo.client.0 to cache LRU end
2026-03-20T12:42:22.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.626+0000 7ff2f5b2f900 10 distributing notification oid=default.rgw.control:notify.0 cni=[op: 0, obj: default.rgw.meta:users.uid:foo.client.0, ofs0, ns]
2026-03-20T12:42:22.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.626+0000 7ff2cd7fa640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 171798691840 cookie 93958688337424 notifier 4667 bl.length()=628
2026-03-20T12:42:22.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.626+0000 7ff2cd7fa640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x17
2026-03-20T12:42:22.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.626+0000 7ff2cd7fa640 10 rgw watcher librados: moving default.rgw.meta+users.uid+foo.client.0 to cache LRU end
2026-03-20T12:42:22.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.628+0000 7ff2f5b2f900 10 cache put: name=default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU info.flags=0x7
2026-03-20T12:42:22.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.628+0000 7ff2f5b2f900 10 moving default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU to cache LRU end
2026-03-20T12:42:22.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.628+0000 7ff2f5b2f900 10 distributing notification oid=default.rgw.control:notify.1 cni=[op: 0, obj: default.rgw.meta:users.keys:MESNBKHWOVPBVNINOGEU, ofs0, ns]
2026-03-20T12:42:22.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.628+0000 7ff2cd7fa640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 171798691840 cookie 93958688355344 notifier 4667 bl.length()=186
2026-03-20T12:42:22.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.628+0000 7ff2cd7fa640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU info.flags=0x7
2026-03-20T12:42:22.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.628+0000 7ff2cd7fa640 10 rgw watcher librados: moving default.rgw.meta+users.keys+MESNBKHWOVPBVNINOGEU to cache LRU end
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "user_id": "foo.client.0",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "display_name": "Mr. foo.client.0",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "email": "",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "suspended": 0,
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "max_buckets": 1000,
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "subusers": [],
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "keys": [
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: {
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "user": "foo.client.0",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "access_key": "MESNBKHWOVPBVNINOGEU",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "secret_key": "MGGRRLRWdcLApHZfqklt0MXnMV5JYRgLOU16EH0O9gKC2QPMR+kiAg==",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "active": true,
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "create_date": "2026-03-20T12:42:22.625670Z"
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: }
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: ],
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "swift_keys": [],
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "caps": [],
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "op_mask": "read, write, delete",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "default_placement": "",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "default_storage_class": "",
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "placement_tags": [],
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "bucket_quota": {
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false,
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "check_on_raw": false,
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "max_size": -1,
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "max_size_kb": 0,
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "max_objects": -1
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "user_quota": {
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false,
2026-03-20T12:42:22.629 INFO:teuthology.orchestra.run.vm00.stdout: "check_on_raw": false,
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "max_size": -1,
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "max_size_kb": 0,
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "max_objects": -1
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: },
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "temp_url_keys": [],
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "type": "rgw",
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "mfa_ids": [],
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "account_id": "",
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "path": "/",
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "create_date": "2026-03-20T12:42:22.625660Z",
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "tags": [],
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout: "group_ids": []
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-20T12:42:22.630 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:42:22.633 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.633+0000 7ff2f5b2f900 20 remove_watcher() i=5
2026-03-20T12:42:22.633 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.633+0000 7ff2f5b2f900 2 removed watcher, disabling cache
2026-03-20T12:42:22.633 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.633+0000 7ff2f5b2f900 20 remove_watcher() i=0
2026-03-20T12:42:22.633 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.633+0000 7ff2f5b2f900 20 remove_watcher() i=3
2026-03-20T12:42:22.633 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.633+0000 7ff2f5b2f900 20 remove_watcher() i=1
2026-03-20T12:42:22.633 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.633+0000 7ff2f5b2f900 20 remove_watcher() i=2
2026-03-20T12:42:22.633 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.633+0000 7ff2f5b2f900 20 remove_watcher() i=4
2026-03-20T12:42:22.633 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.634+0000 7ff2f5b2f900 20 remove_watcher() i=7
2026-03-20T12:42:22.634 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T12:42:22.634+0000 7ff2f5b2f900 20 remove_watcher() i=6
2026-03-20T12:42:22.639 INFO:tasks.dedup_tests:Configuring dedup-tests...
2026-03-20T12:42:22.639 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T12:42:22.639 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/deduptests.client.0.conf
2026-03-20T12:42:22.762 INFO:tasks.dedup_tests:Running dedup-tests...
2026-03-20T12:42:22.762 DEBUG:teuthology.orchestra.run.vm00:dedup tests against rgw> source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test'
2026-03-20T12:42:23.138 INFO:teuthology.orchestra.run.vm00.stdout:py: install_deps> python -I -m pip install -r requirements.txt
2026-03-20T12:42:25.813 INFO:teuthology.orchestra.run.vm00.stdout:py: commands[0]> pytest -v -m 'basic_test or request_test or example_test'
2026-03-20T12:42:25.919 INFO:teuthology.orchestra.run.vm00.stdout:============================= test session starts ==============================
2026-03-20T12:42:25.919 INFO:teuthology.orchestra.run.vm00.stdout:platform linux -- Python 3.11.15, pytest-9.0.2, pluggy-1.6.0 -- /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/bin/python
2026-03-20T12:42:25.919 INFO:teuthology.orchestra.run.vm00.stdout:cachedir: .tox/py/.pytest_cache
2026-03-20T12:42:25.919 INFO:teuthology.orchestra.run.vm00.stdout:rootdir: /home/ubuntu/cephtest/ceph/src/test/rgw/dedup
2026-03-20T12:42:25.919 INFO:teuthology.orchestra.run.vm00.stdout:configfile: pytest.ini
2026-03-20T12:42:26.030 INFO:teuthology.orchestra.run.vm00.stdout:collecting ...
collected 34 items
2026-03-20T12:42:26.030 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T12:42:26.156 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_etag_corruption PASSED [ 2%]
2026-03-20T12:42:26.156 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_md5_collisions PASSED [ 5%]
2026-03-20T12:42:26.157 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small PASSED [ 8%]
2026-03-20T12:42:26.157 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_with_tenants PASSED [ 11%]
2026-03-20T12:42:26.157 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_0_with_tenants PASSED [ 14%]
2026-03-20T12:42:26.157 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_0 PASSED [ 17%]
2026-03-20T12:42:26.158 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_1_with_tenants PASSED [ 20%]
2026-03-20T12:42:26.158 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_1 PASSED [ 23%]
2026-03-20T12:42:26.158 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_2_with_tenants PASSED [ 26%]
2026-03-20T12:42:26.158 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_2 PASSED [ 29%]
2026-03-20T12:42:26.159 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_with_remove_multi_tenants PASSED [ 32%]
2026-03-20T12:42:26.159 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_with_remove PASSED [ 35%]
2026-03-20T12:42:26.159 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_multipart_with_tenants PASSED [ 38%]
2026-03-20T12:42:26.159 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_multipart PASSED [ 41%]
2026-03-20T12:42:26.160 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_basic_with_tenants PASSED [ 44%]
2026-03-20T12:42:26.160 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_basic PASSED [ 47%]
2026-03-20T12:42:26.160 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_multipart_with_tenants PASSED [ 50%]
2026-03-20T12:42:26.160 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_multipart PASSED [ 52%]
2026-03-20T12:42:26.160 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_large_scale_with_tenants PASSED [ 55%]
2026-03-20T12:42:26.161 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_large_scale PASSED [ 58%]
2026-03-20T12:42:26.161 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_empty_bucket PASSED [ 61%]
2026-03-20T12:42:26.161 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_loop_with_tenants PASSED [ 64%]
2026-03-20T12:42:32.749 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_with_tenants
2026-03-20T12:42:32.749 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:42:32.749 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:42:33.288 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 67%]
2026-03-20T12:44:52.207 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart
2026-03-20T12:44:52.207 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:44:52.207 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:44:57.122 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 70%]
2026-03-20T12:45:05.469 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_basic
2026-03-20T12:45:05.469 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:45:05.469
INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:45:05.992 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 73%]
2026-03-20T12:45:16.433 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_multipart
2026-03-20T12:45:16.433 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:45:16.433 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:45:16.959 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 76%]
2026-03-20T12:45:22.987 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small
2026-03-20T12:45:22.987 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:45:22.987 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:45:23.468 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 79%]
2026-03-20T12:45:39.173 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_large_mix
2026-03-20T12:45:39.174 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:45:39.174 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:45:40.390 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 82%]
2026-03-20T12:45:57.322 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_basic_with_tenants
2026-03-20T12:45:57.322 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:45:57.322 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:45:58.519 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 85%]
2026-03-20T12:46:58.331 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart_with_tenants
2026-03-20T12:46:58.332 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:46:58.332 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:47:00.816 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 88%]
2026-03-20T12:47:10.260 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_multipart_with_tenants
2026-03-20T12:47:10.260 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:47:10.260 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:47:10.997 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 91%]
2026-03-20T12:54:08.717 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale_with_tenants
2026-03-20T12:54:08.717 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T12:54:08.717 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds
2026-03-20T12:54:08.717 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1288 [64] obj_count=65565, upload=401(sec), exec=5(sec), verify=0(sec)
2026-03-20T12:54:33.076 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T12:54:33.076+0000 7fc8432a3640 -1 log_channel(cluster) log [ERR] : Health check failed: mon c is very low on available space (MON_DISK_CRIT)
2026-03-20T12:54:39.029 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T12:54:39.029+0000 7fc845aa8640 -1 log_channel(cluster) log [ERR] : Health check update: mons a,c are very low on available space (MON_DISK_CRIT)
2026-03-20T12:56:01.385 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 94%]
2026-03-20T12:56:14.592 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T12:56:14.592+0000 7fc845aa8640 -1 log_channel(cluster) log [ERR] : Health check update: mons a,b,c are very low on available space (MON_DISK_CRIT)
2026-03-20T12:56:24.876 INFO:tasks.rgw.client.0.vm00.stdout:problem writing to /var/log/ceph/rgw.ceph.client.0.log: tee: /var/log/ceph/rgw.ceph.client.0.stdout: No space left on device
2026-03-20T12:56:24.876 INFO:tasks.rgw.client.0.vm00.stdout:(28) No space left on device
2026-03-20T12:56:24.876 INFO:tasks.ceph.osd.2.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.2.log: (28) No space left on device
2026-03-20T12:56:24.876 INFO:tasks.ceph.osd.3.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.3.log: (28) No space left on device
2026-03-20T12:56:24.876 INFO:tasks.ceph.osd.0.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.0.log: (28) No space left on device
2026-03-20T12:56:24.876 INFO:tasks.ceph.osd.1.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.1.log: (28) No space left on device
2026-03-20T12:56:25.373 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device
2026-03-20T12:56:25.497 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device
2026-03-20T12:56:28.477 INFO:tasks.ceph.mgr.y.vm00.stderr:problem writing to /var/log/ceph/ceph-mgr.y.log: (28) No space left on device
2026-03-20T12:56:28.980 INFO:tasks.rgw.client.1.vm06.stdout:2026-03-20T12:56:28.979+0000 7f23d9002640 -1 restore: virtual void* rgw::restore::Restore::RestoreWorker::entry(): ERROR: restore process() returned error r=-16
2026-03-20T12:57:39.258 INFO:tasks.ceph.osd.5.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.5.log: (28) No space left on device
2026-03-20T12:57:39.259
INFO:tasks.ceph.osd.7.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.7.log: (28) No space left on device
2026-03-20T12:57:39.259 INFO:tasks.ceph.osd.4.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left on device
2026-03-20T12:57:39.260 INFO:tasks.ceph.osd.4.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left on device
2026-03-20T12:57:39.277 INFO:tasks.ceph.osd.4.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left on device
2026-03-20T12:57:39.283 INFO:tasks.ceph.osd.6.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.6.log: (28) No space left on device
2026-03-20T12:57:39.290 INFO:tasks.ceph.mgr.x.vm06.stderr:problem writing to /var/log/ceph/ceph-mgr.x.log: (28) No space left on device
2026-03-20T12:57:39.290 INFO:tasks.ceph.osd.5.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.5.log: (28) No space left on device
2026-03-20T12:57:39.292 INFO:tasks.ceph.mon.b.vm06.stderr:problem writing to /var/log/ceph/ceph-mon.b.log: (28) No space left on device
2026-03-20T12:57:49.319 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T12:57:49.319+0000 7fc845aa8640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-a/store.db/000022.log: No space left on device code =  Rocksdb transaction:
2026-03-20T12:57:49.319 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = '1131' value size = 611)
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8)
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8)
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7fc845aa8640 time 2026-03-20T12:57:49.320277+0000
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7fc84b9911f3]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x563967a229bc]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563967ba6d8c]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x563967bb2dab]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x563967bb31a8]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x24f) [0x563967bb35bf]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a6c5d) [0x563967a22c5d]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (CommonSafeTimer::timer_thread()+0x130) [0x7fc84baddde0]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fc84bade841]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2fa) [0x7fc84aa8b2fa]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103d0) [0x7fc84ab103d0]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T12:57:49.320+0000 7fc845aa8640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7fc845aa8640 time 2026-03-20T12:57:49.320277+0000
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7fc84b9911f3]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x563967a229bc]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563967ba6d8c]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x563967bb2dab]
2026-03-20T12:57:49.320 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x563967bb31a8]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x24f) [0x563967bb35bf]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a6c5d) [0x563967a22c5d]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (CommonSafeTimer::timer_thread()+0x130) [0x7fc84baddde0]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fc84bade841]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2fa) [0x7fc84aa8b2fa]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103d0) [0x7fc84ab103d0]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr:*** Caught signal (Aborted) **
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7fc845aa8640 thread_name:safe_timer
2026-03-20T12:57:49.321
INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7fc84aa3fc30]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7fc84aa8d03c]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise()
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort()
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7fc84b9912b0]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x563967a229bc]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563967ba6d8c]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) [0x563967bb2dab]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x563967bb31a8]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x24f) [0x563967bb35bf]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a6c5d) [0x563967a22c5d]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7fc84baddde0]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fc84bade841]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2fa) [0x7fc84aa8b2fa]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103d0) [0x7fc84ab103d0]
2026-03-20T12:57:49.321 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T12:57:49.321+0000 7fc845aa8640 -1 *** Caught signal (Aborted) **
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7fc845aa8640 thread_name:safe_timer
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7fc84aa3fc30]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7fc84aa8d03c]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise()
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort()
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7fc84b9912b0]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x563967a229bc]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563967ba6d8c]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) [0x563967bb2dab]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x563967bb31a8]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x24f) [0x563967bb35bf]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a6c5d) [0x563967a22c5d]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7fc84baddde0]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fc84bade841]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2fa) [0x7fc84aa8b2fa]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103d0) [0x7fc84ab103d0]
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-03-20T12:57:49.322 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: -2> 2026-03-20T12:57:49.319+0000 7fc845aa8640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-a/store.db/000022.log: No space left on device code =  Rocksdb transaction:
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = '1131' value size = 611)
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8)
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8)
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: -1> 2026-03-20T12:57:49.320+0000 7fc845aa8640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7fc845aa8640 time 2026-03-20T12:57:49.320277+0000
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7fc84b9911f3]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x563967a229bc]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563967ba6d8c]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x563967bb2dab]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x563967bb31a8]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x24f) [0x563967bb35bf]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a6c5d) [0x563967a22c5d]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (CommonSafeTimer::timer_thread()+0x130) [0x7fc84baddde0]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fc84bade841]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2fa) [0x7fc84aa8b2fa]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103d0) [0x7fc84ab103d0]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 0> 2026-03-20T12:57:49.321+0000 7fc845aa8640 -1 *** Caught signal (Aborted) **
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7fc845aa8640 thread_name:safe_timer
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7fc84aa3fc30]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7fc84aa8d03c]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise()
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort()
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7fc84b9912b0]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x563967a229bc]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563967ba6d8c]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) [0x563967bb2dab]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x563967bb31a8]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x24f) [0x563967bb35bf]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a6c5d) [0x563967a22c5d]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7fc84baddde0]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fc84bade841]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2fa) [0x7fc84aa8b2fa]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103d0) [0x7fc84ab103d0]
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2026-03-20T12:57:49.337 INFO:tasks.ceph.mon.a.vm00.stderr:
2026-03-20T12:57:49.338 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device
space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing 
to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 
INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No 
space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr: -9999> 2026-03-20T12:57:49.319+0000 7fc845aa8640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-a/store.db/000022.log: No space left on device code =  Rocksdb transaction: 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = '1131' value size = 611) 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8) 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8) 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr: -9998> 2026-03-20T12:57:49.320+0000 7fc845aa8640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7fc845aa8640 time 2026-03-20T12:57:49.320277+0000 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-03-20T12:57:49.343 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo) 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7fc84b9911f3] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x563967a229bc] 2026-03-20T12:57:49.344 
INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563967ba6d8c] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x563967bb2dab] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x563967bb31a8] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x24f) [0x563967bb35bf] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a6c5d) [0x563967a22c5d] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (CommonSafeTimer::timer_thread()+0x130) [0x7fc84baddde0] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fc84bade841] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2fa) [0x7fc84aa8b2fa] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103d0) [0x7fc84ab103d0] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: -9997> 2026-03-20T12:57:49.321+0000 7fc845aa8640 -1 *** Caught signal (Aborted) ** 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7fc845aa8640 thread_name:safe_timer 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo) 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7fc84aa3fc30] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7fc84aa8d03c] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise() 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort() 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, 
char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7fc84b9912b0] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x563967a229bc] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563967ba6d8c] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) [0x563967bb2dab] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x563967bb31a8] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x24f) [0x563967bb35bf] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a6c5d) [0x563967a22c5d] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7fc84baddde0] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fc84bade841] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2fa) [0x7fc84aa8b2fa] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103d0) [0x7fc84ab103d0] 2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
2026-03-20T12:57:49.344 INFO:tasks.ceph.mon.a.vm00.stderr: 2026-03-20T12:57:49.397 INFO:tasks.ceph.mon.a.vm00.stderr:daemon-helper: command crashed with signal 6 2026-03-20T12:57:51.361 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~0s 2026-03-20T12:57:51.844 INFO:tasks.ceph.mon.b.vm06.stderr:2026-03-20T12:57:51.843+0000 7f986a146640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-b/store.db/000022.log: No space left on device code =  Rocksdb transaction: 2026-03-20T12:57:51.845 INFO:tasks.ceph.mon.b.vm06.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238) 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f986a146640 time 2026-03-20T12:57:51.845982+0000 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo) 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f98727911f3] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 2: ceph-mon(+0x2a69bc) [0x5572fb7c79bc] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x5572fb8ab175] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x5572fb8b4211] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x5572fb8b05e0] 
2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 6: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x5572fb8b1197] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x5572fb82073d] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x5572fb814ec6] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 9: ceph-mon(+0x2b3b8c) [0x5572fb7d4b8c] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f9872a08518] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f9872a9cc11] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 12: /lib64/libc.so.6(+0x8b2fa) [0x7f987188b2fa] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 13: /lib64/libc.so.6(+0x1103d0) [0x7f98719103d0] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr:2026-03-20T12:57:51.844+0000 7f986a146640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f986a146640 time 2026-03-20T12:57:51.845982+0000 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo) 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f98727911f3] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 2: ceph-mon(+0x2a69bc) [0x5572fb7c79bc] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 3: 
(Elector::persist_connectivity_scores()+0x135) [0x5572fb8ab175] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x5572fb8b4211] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x5572fb8b05e0] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 6: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x5572fb8b1197] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x5572fb82073d] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x5572fb814ec6] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 9: ceph-mon(+0x2b3b8c) [0x5572fb7d4b8c] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f9872a08518] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f9872a9cc11] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 12: /lib64/libc.so.6(+0x8b2fa) [0x7f987188b2fa] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 13: /lib64/libc.so.6(+0x1103d0) [0x7f98719103d0] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr:*** Caught signal (Aborted) ** 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: in thread 7f986a146640 thread_name:ms_dispatch 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo) 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f987183fc30] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f987188d03c] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 3: raise() 2026-03-20T12:57:51.846 
INFO:tasks.ceph.mon.b.vm06.stderr: 4: abort() 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f98727912b0] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 6: ceph-mon(+0x2a69bc) [0x5572fb7c79bc] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x5572fb8ab175] 2026-03-20T12:57:51.846 INFO:tasks.ceph.mon.b.vm06.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x5572fb8b4211] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x5572fb8b05e0] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 10: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x5572fb8b1197] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x5572fb82073d] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x5572fb814ec6] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 13: ceph-mon(+0x2b3b8c) [0x5572fb7d4b8c] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f9872a08518] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f9872a9cc11] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 16: /lib64/libc.so.6(+0x8b2fa) [0x7f987188b2fa] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 17: /lib64/libc.so.6(+0x1103d0) [0x7f98719103d0] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr:2026-03-20T12:57:51.845+0000 7f986a146640 -1 *** Caught signal (Aborted) ** 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: in thread 7f986a146640 thread_name:ms_dispatch 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 2026-03-20T12:57:51.847 
INFO:tasks.ceph.mon.b.vm06.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo) 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f987183fc30] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f987188d03c] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 3: raise() 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 4: abort() 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f98727912b0] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 6: ceph-mon(+0x2a69bc) [0x5572fb7c79bc] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x5572fb8ab175] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x5572fb8b4211] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x5572fb8b05e0] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 10: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x5572fb8b1197] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x5572fb82073d] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x5572fb814ec6] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 13: ceph-mon(+0x2b3b8c) [0x5572fb7d4b8c] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f9872a08518] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f9872a9cc11] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 16: /lib64/libc.so.6(+0x8b2fa) 
[0x7f987188b2fa] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 17: /lib64/libc.so.6(+0x1103d0) [0x7f98719103d0] 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2026-03-20T12:57:51.847 INFO:tasks.ceph.mon.b.vm06.stderr: 2026-03-20T12:57:51.848 INFO:tasks.ceph.mon.b.vm06.stderr:problem writing to /var/log/ceph/ceph-mon.b.log: (28) No space left on device 
2026-03-20T12:57:51.858 INFO:tasks.ceph.mon.b.vm06.stderr:
2026-03-20T12:57:51.859 INFO:tasks.ceph.mon.b.vm06.stderr:problem writing to /var/log/ceph/ceph-mon.b.log: (28) No space left on device
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: -9999> 2026-03-20T12:57:51.843+0000 7f986a146640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-b/store.db/000022.log: No space left on device code =  Rocksdb transaction:
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: -9998> 2026-03-20T12:57:51.844+0000 7f986a146640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f986a146640 time 2026-03-20T12:57:51.845982+0000
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr:
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f98727911f3]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 2: ceph-mon(+0x2a69bc) [0x5572fb7c79bc]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x5572fb8ab175]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x5572fb8b4211]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x5572fb8b05e0]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 6: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x5572fb8b1197]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x5572fb82073d]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x5572fb814ec6]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 9: ceph-mon(+0x2b3b8c) [0x5572fb7d4b8c]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f9872a08518]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f9872a9cc11]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 12: /lib64/libc.so.6(+0x8b2fa) [0x7f987188b2fa]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 13: /lib64/libc.so.6(+0x1103d0) [0x7f98719103d0]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr:
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: -9997> 2026-03-20T12:57:51.845+0000 7f986a146640 -1 *** Caught signal (Aborted) **
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: in thread 7f986a146640 thread_name:ms_dispatch
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr:
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f987183fc30]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f987188d03c]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 3: raise()
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 4: abort()
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f98727912b0]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 6: ceph-mon(+0x2a69bc) [0x5572fb7c79bc]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x5572fb8ab175]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x5572fb8b4211]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x5572fb8b05e0]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 10: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x5572fb8b1197]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x5572fb82073d]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x5572fb814ec6]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 13: ceph-mon(+0x2b3b8c) [0x5572fb7d4b8c]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f9872a08518]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f9872a9cc11]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 16: /lib64/libc.so.6(+0x8b2fa) [0x7f987188b2fa]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: 17: /lib64/libc.so.6(+0x1103d0) [0x7f98719103d0]
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this.
2026-03-20T12:57:51.864 INFO:tasks.ceph.mon.b.vm06.stderr:
2026-03-20T12:57:51.943 INFO:tasks.ceph.mon.b.vm06.stderr:daemon-helper: command crashed with signal 6
2026-03-20T12:57:57.774 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~7s
2026-03-20T12:57:57.774 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~0s
2026-03-20T12:57:57.808 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f5644a75640 time 2026-03-20T12:57:57.808622+0000
2026-03-20T12:57:57.808 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f564a9911f3]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x562030aa69bc]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x562030b8a175]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x562030b939d1]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::dead_ping(int)+0x1a1) [0x562030b8b4a1]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a6c5d) [0x562030aa6c5d]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (CommonSafeTimer::timer_thread()+0x130) [0x7f564aaddde0]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 8: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f564aade841]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 9: /lib64/libc.so.6(+0x8b2fa) [0x7f5649a8b2fa]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 10: /lib64/libc.so.6(+0x1103d0) [0x7f5649b103d0]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr:*** Caught signal (Aborted) **
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f5644a75640 thread_name:safe_timer
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f5649a3fc30]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f5649a8d03c]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f564a9912b0]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x562030aa69bc]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x562030b8a175]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x562030b939d1]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::dead_ping(int)+0x1a1) [0x562030b8b4a1]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 10: ceph-mon(+0x2a6c5d) [0x562030aa6c5d]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (CommonSafeTimer::timer_thread()+0x130) [0x7f564aaddde0]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f564aade841]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x8b2fa) [0x7f5649a8b2fa]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 14: /lib64/libc.so.6(+0x1103d0) [0x7f5649b103d0]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr:2026-03-20T12:57:57.808+0000 7f5644a75640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-c/store.db/000022.log: No space left on device code =  Rocksdb transaction:
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr:2026-03-20T12:57:57.808+0000 7f5644a75640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f5644a75640 time 2026-03-20T12:57:57.808622+0000
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f564a9911f3]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x562030aa69bc]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x562030b8a175]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x562030b939d1]
2026-03-20T12:57:57.809 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::dead_ping(int)+0x1a1) [0x562030b8b4a1]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a6c5d) [0x562030aa6c5d]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (CommonSafeTimer::timer_thread()+0x130) [0x7f564aaddde0]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 8: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f564aade841]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 9: /lib64/libc.so.6(+0x8b2fa) [0x7f5649a8b2fa]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 10: /lib64/libc.so.6(+0x1103d0) [0x7f5649b103d0]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr:2026-03-20T12:57:57.809+0000 7f5644a75640 -1 *** Caught signal (Aborted) **
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f5644a75640 thread_name:safe_timer
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f5649a3fc30]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f5649a8d03c]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f564a9912b0]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x562030aa69bc]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x562030b8a175]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x562030b939d1]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::dead_ping(int)+0x1a1) [0x562030b8b4a1]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 10: ceph-mon(+0x2a6c5d) [0x562030aa6c5d]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (CommonSafeTimer::timer_thread()+0x130) [0x7f564aaddde0]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f564aade841]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x8b2fa) [0x7f5649a8b2fa]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: 14: /lib64/libc.so.6(+0x1103d0) [0x7f5649b103d0]
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this.
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T12:57:57.810 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device
2026-03-20T12:57:57.833 INFO:tasks.ceph.mon.c.vm00.stderr: -2> 2026-03-20T12:57:57.808+0000 7f5644a75640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-c/store.db/000022.log: No space left on device code =  Rocksdb transaction:
2026-03-20T12:57:57.833 INFO:tasks.ceph.mon.c.vm00.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T12:57:57.833 INFO:tasks.ceph.mon.c.vm00.stderr: -1> 2026-03-20T12:57:57.808+0000 7f5644a75640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f5644a75640 time 2026-03-20T12:57:57.808622+0000
2026-03-20T12:57:57.833 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T12:57:57.833 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T12:57:57.833 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:57.833 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f564a9911f3]
2026-03-20T12:57:57.833 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x562030aa69bc]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x562030b8a175]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x562030b939d1]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::dead_ping(int)+0x1a1) [0x562030b8b4a1]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a6c5d) [0x562030aa6c5d]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (CommonSafeTimer::timer_thread()+0x130) [0x7f564aaddde0]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 8: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f564aade841]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 9: /lib64/libc.so.6(+0x8b2fa) [0x7f5649a8b2fa]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 10: /lib64/libc.so.6(+0x1103d0) [0x7f5649b103d0]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 0> 2026-03-20T12:57:57.809+0000 7f5644a75640 -1 *** Caught signal (Aborted) **
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f5644a75640 thread_name:safe_timer
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f5649a3fc30]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f5649a8d03c]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f564a9912b0]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x562030aa69bc]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x562030b8a175]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x562030b939d1]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::dead_ping(int)+0x1a1) [0x562030b8b4a1]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 10: ceph-mon(+0x2a6c5d) [0x562030aa6c5d]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (CommonSafeTimer::timer_thread()+0x130) [0x7f564aaddde0]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f564aade841]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x8b2fa) [0x7f5649a8b2fa]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 14: /lib64/libc.so.6(+0x1103d0) [0x7f5649b103d0]
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this.
2026-03-20T12:57:57.834 INFO:tasks.ceph.mon.c.vm00.stderr: 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.835 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 
INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No 
space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing 
to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.836 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 
INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No 
space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.838 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing 
to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 
INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No 
space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: -9999> 2026-03-20T12:57:57.808+0000 7f5644a75640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-c/store.db/000022.log: No space left on device code =  Rocksdb transaction: 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238) 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: -9998> 2026-03-20T12:57:57.808+0000 7f5644a75640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f5644a75640 time 2026-03-20T12:57:57.808622+0000 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo) 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc9) [0x7f564a9911f3] 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x562030aa69bc] 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x562030b8a175] 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x562030b939d1] 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::dead_ping(int)+0x1a1) [0x562030b8b4a1] 2026-03-20T12:57:57.839 
INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a6c5d) [0x562030aa6c5d] 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (CommonSafeTimer::timer_thread()+0x130) [0x7f564aaddde0] 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 8: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f564aade841] 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 9: /lib64/libc.so.6(+0x8b2fa) [0x7f5649a8b2fa] 2026-03-20T12:57:57.839 INFO:tasks.ceph.mon.c.vm00.stderr: 10: /lib64/libc.so.6(+0x1103d0) [0x7f5649b103d0] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: -9997> 2026-03-20T12:57:57.809+0000 7f5644a75640 -1 *** Caught signal (Aborted) ** 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f5644a75640 thread_name:safe_timer 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo) 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f5649a3fc30] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f5649a8d03c] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise() 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort() 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x186) [0x7f564a9912b0] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x562030aa69bc] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x562030b8a175] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x562030b939d1] 
2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::dead_ping(int)+0x1a1) [0x562030b8b4a1] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 10: ceph-mon(+0x2a6c5d) [0x562030aa6c5d] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (CommonSafeTimer::timer_thread()+0x130) [0x7f564aaddde0] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f564aade841] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x8b2fa) [0x7f5649a8b2fa] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 14: /lib64/libc.so.6(+0x1103d0) [0x7f5649b103d0] 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2026-03-20T12:57:57.840 INFO:tasks.ceph.mon.c.vm00.stderr: 2026-03-20T12:57:58.011 INFO:tasks.ceph.mon.c.vm00.stderr:daemon-helper: command crashed with signal 6 2026-03-20T12:58:04.082 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~13s 2026-03-20T12:58:04.082 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~0s 2026-03-20T12:58:04.082 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~6s 2026-03-20T12:58:10.389 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~19s 2026-03-20T12:58:10.389 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~6s 2026-03-20T12:58:10.389 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~13s 2026-03-20T12:58:16.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~26s 2026-03-20T12:58:16.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~13s 2026-03-20T12:58:16.697 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~19s 2026-03-20T12:58:23.004 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for 
~32s 2026-03-20T12:58:23.004 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~19s 2026-03-20T12:58:23.004 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~25s 2026-03-20T12:58:29.313 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~38s 2026-03-20T12:58:29.313 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~25s 2026-03-20T12:58:29.313 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~32s 2026-03-20T12:58:35.623 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~44s 2026-03-20T12:58:35.623 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~32s 2026-03-20T12:58:35.623 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~38s 2026-03-20T12:58:41.929 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~51s 2026-03-20T12:58:41.929 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~38s 2026-03-20T12:58:41.929 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~44s 2026-03-20T12:58:48.235 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~57s 2026-03-20T12:58:48.235 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~44s 2026-03-20T12:58:48.235 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~51s 2026-03-20T12:58:54.542 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~63s 2026-03-20T12:58:54.542 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~50s 2026-03-20T12:58:54.542 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~57s 2026-03-20T12:59:00.848 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~70s 2026-03-20T12:59:00.848 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~57s 2026-03-20T12:59:00.848 
INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~63s 2026-03-20T12:59:07.155 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~76s 2026-03-20T12:59:07.156 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~63s 2026-03-20T12:59:07.156 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~69s 2026-03-20T12:59:13.462 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~82s 2026-03-20T12:59:13.462 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~69s 2026-03-20T12:59:13.462 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~76s 2026-03-20T12:59:19.770 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~89s 2026-03-20T12:59:19.770 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~76s 2026-03-20T12:59:19.770 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~82s 2026-03-20T12:59:26.076 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~95s 2026-03-20T12:59:26.076 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~82s 2026-03-20T12:59:26.076 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~88s 2026-03-20T12:59:32.383 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~101s 2026-03-20T12:59:32.383 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~88s 2026-03-20T12:59:32.383 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~95s 2026-03-20T12:59:38.690 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~108s 2026-03-20T12:59:38.690 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~95s 2026-03-20T12:59:38.690 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~101s 2026-03-20T12:59:44.996 
INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~114s 2026-03-20T12:59:44.997 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~101s 2026-03-20T12:59:44.997 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~107s 2026-03-20T12:59:51.303 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~120s 2026-03-20T12:59:51.303 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~107s 2026-03-20T12:59:51.303 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~114s 2026-03-20T12:59:57.609 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~126s 2026-03-20T12:59:57.609 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~114s 2026-03-20T12:59:57.609 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~120s 2026-03-20T12:59:58.872 INFO:tasks.rgw.client.1.vm06.stdout:problem writing to /var/log/ceph/rgw.ceph.client.1.log: (28) No space left on device 2026-03-20T13:00:03.916 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~133s 2026-03-20T13:00:03.916 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~120s 2026-03-20T13:00:03.916 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~126s 2026-03-20T13:00:10.222 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~139s 2026-03-20T13:00:10.222 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~126s 2026-03-20T13:00:10.222 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~133s 2026-03-20T13:00:16.528 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~145s 2026-03-20T13:00:16.528 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~132s 2026-03-20T13:00:16.528 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~139s 
2026-03-20T13:00:22.833 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~152s 2026-03-20T13:00:22.833 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~139s 2026-03-20T13:00:22.833 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~145s 2026-03-20T13:00:29.141 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~158s 2026-03-20T13:00:29.141 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~145s 2026-03-20T13:00:29.141 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~151s 2026-03-20T13:00:35.449 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~164s 2026-03-20T13:00:35.449 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~151s 2026-03-20T13:00:35.449 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~158s 2026-03-20T13:00:41.761 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~171s 2026-03-20T13:00:41.762 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~158s 2026-03-20T13:00:41.762 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~164s 2026-03-20T13:00:48.074 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~177s 2026-03-20T13:00:48.074 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~164s 2026-03-20T13:00:48.074 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~170s 2026-03-20T13:00:54.386 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~183s 2026-03-20T13:00:54.387 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~170s 2026-03-20T13:00:54.387 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~177s 2026-03-20T13:01:00.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~190s 2026-03-20T13:01:00.696 
INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~177s 2026-03-20T13:01:00.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~183s 2026-03-20T13:01:07.006 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~196s 2026-03-20T13:01:07.007 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~183s 2026-03-20T13:01:07.007 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~189s 2026-03-20T13:01:13.319 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~202s 2026-03-20T13:01:13.319 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~189s 2026-03-20T13:01:13.319 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~196s 2026-03-20T13:01:19.629 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~208s 2026-03-20T13:01:19.629 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~196s 2026-03-20T13:01:19.629 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~202s 2026-03-20T13:01:25.937 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~215s 2026-03-20T13:01:25.937 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~202s 2026-03-20T13:01:25.937 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~208s 2026-03-20T13:01:32.248 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~221s 2026-03-20T13:01:32.249 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~208s 2026-03-20T13:01:32.249 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~215s 2026-03-20T13:01:38.561 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~227s 2026-03-20T13:01:38.562 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~214s 2026-03-20T13:01:38.562 
INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~221s 2026-03-20T13:01:44.873 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~234s 2026-03-20T13:01:44.873 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~221s 2026-03-20T13:01:44.873 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~227s 2026-03-20T13:01:51.185 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~240s 2026-03-20T13:01:51.186 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~227s 2026-03-20T13:01:51.186 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~234s 2026-03-20T13:01:57.498 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~246s 2026-03-20T13:01:57.499 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~233s 2026-03-20T13:01:57.499 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~240s 2026-03-20T13:02:03.807 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~253s 2026-03-20T13:02:03.807 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~240s 2026-03-20T13:02:03.807 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~246s 2026-03-20T13:02:10.116 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~259s 2026-03-20T13:02:10.116 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~246s 2026-03-20T13:02:10.116 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~252s 2026-03-20T13:02:16.424 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~265s 2026-03-20T13:02:16.424 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~252s 2026-03-20T13:02:16.424 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~259s 2026-03-20T13:02:22.733 
INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~272s
2026-03-20T13:02:22.733 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~259s
2026-03-20T13:02:22.733 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~265s
2026-03-20T13:02:29.041 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~278s
2026-03-20T13:02:29.041 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~265s
2026-03-20T13:02:29.041 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~271s
2026-03-20T13:02:35.348 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~284s
2026-03-20T13:02:35.349 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~271s
2026-03-20T13:02:35.349 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~278s
2026-03-20T13:02:41.655 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~290s
2026-03-20T13:02:41.655 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~278s
2026-03-20T13:02:41.655 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~284s
2026-03-20T13:02:47.964 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~297s
2026-03-20T13:02:47.965 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~284s
2026-03-20T13:02:47.965 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~290s
2026-03-20T13:02:54.272 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~303s
2026-03-20T13:02:54.272 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~290s
2026-03-20T13:02:54.272 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~297s
2026-03-20T13:02:54.272 INFO:tasks.daemonwatchdog.daemon_watchdog:BARK!
unmounting mounts and killing all daemons
2026-03-20T13:02:55.578 INFO:tasks.ceph.osd.0:Sent signal 15
2026-03-20T13:02:55.578 INFO:tasks.ceph.osd.1:Sent signal 15
2026-03-20T13:02:55.578 INFO:tasks.ceph.osd.2:Sent signal 15
2026-03-20T13:02:55.578 INFO:tasks.ceph.osd.3:Sent signal 15
2026-03-20T13:02:55.578 INFO:tasks.ceph.osd.4:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.5:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.6:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.7:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.rgw.client.0:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.rgw.client.1:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.rgw.client.2:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.ceph.mgr.y:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.ceph.mgr.x:Sent signal 15
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T13:02:55.578+0000 7fd512c16640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 0 (PID: 56927) UID: 0
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T13:02:55.578+0000 7fd512c16640 -1 osd.0 73 *** Got signal Terminated ***
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T13:02:55.578+0000 7fd512c16640 -1 osd.0 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T13:02:55.578+0000 7f2bc1e94640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 1 (PID: 56926) UID: 0
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T13:02:55.578+0000 7f2bc1e94640 -1 osd.1 73 *** Got signal Terminated ***
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T13:02:55.578+0000 7f2bc1e94640 -1 osd.1 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T13:02:55.578+0000
7fdcdfc4a640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 2 (PID: 56939) UID: 0
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T13:02:55.578+0000 7fdcdfc4a640 -1 osd.2 73 *** Got signal Terminated ***
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T13:02:55.578+0000 7f1b4a598640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 3 (PID: 56945) UID: 0
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T13:02:55.578+0000 7f1b4a598640 -1 osd.3 73 *** Got signal Terminated ***
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T13:02:55.578+0000 7f1b4a598640 -1 osd.3 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T13:02:55.579 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T13:02:55.578+0000 7fdcdfc4a640 -1 osd.2 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T13:02:55.579 INFO:tasks.rgw.client.0.vm00.stdout:2026-03-20T13:02:55.578+0000 7f2911291640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --foreground (PID: 62768) UID: 0
2026-03-20T13:02:55.580 INFO:tasks.rgw.client.0.vm00.stdout:2026-03-20T13:02:55.578+0000 7f2914d31980 -1 shutting down
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-20T13:02:55.579+0000 7f17d19b6640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 4 (PID: 59228) UID: 0
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-20T13:02:55.579+0000 7f17d19b6640 -1 osd.4 73 *** Got signal Terminated ***
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-20T13:02:55.579+0000
7f17d19b6640 -1 osd.4 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.6.vm06.stderr:2026-03-20T13:02:55.579+0000 7fb606af6640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 6 (PID: 59225) UID: 0
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.6.vm06.stderr:2026-03-20T13:02:55.579+0000 7fb606af6640 -1 osd.6 73 *** Got signal Terminated ***
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.6.vm06.stderr:2026-03-20T13:02:55.579+0000 7fb606af6640 -1 osd.6 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.7.vm06.stderr:2026-03-20T13:02:55.579+0000 7f9eaf9af640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 7 (PID: 59232) UID: 0
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.7.vm06.stderr:2026-03-20T13:02:55.579+0000 7f9eaf9af640 -1 osd.7 73 *** Got signal Terminated ***
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.7.vm06.stderr:2026-03-20T13:02:55.579+0000 7f9eaf9af640 -1 osd.7 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-20T13:02:55.579+0000 7f36bc8af640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 5 (PID: 59230) UID: 0
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-20T13:02:55.579+0000 7f36bc8af640 -1 osd.5 73 *** Got signal Terminated ***
2026-03-20T13:02:55.580 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-20T13:02:55.579+0000 7f36bc8af640 -1 osd.5 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T13:02:55.580 INFO:tasks.rgw.client.1.vm06.stdout:2026-03-20T13:02:55.579+0000 7f24fea4d640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.1 --cluster ceph -k /etc/ceph/ceph.client.1.keyring --log-file
/var/log/ceph/rgw.ceph.client.1.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock --foreground (PID: 63402) UID: 0
2026-03-20T13:02:55.580 INFO:tasks.rgw.client.1.vm06.stdout:2026-03-20T13:02:55.579+0000 7f25024ed980 -1 shutting down
2026-03-20T13:02:55.580 INFO:tasks.rgw.client.2.vm09.stdout:2026-03-20T13:02:55.580+0000 7fc782181640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.2 --cluster ceph -k /etc/ceph/ceph.client.2.keyring --log-file /var/log/ceph/rgw.ceph.client.2.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock --foreground (PID: 51069) UID: 0
2026-03-20T13:02:55.581 INFO:tasks.rgw.client.2.vm09.stdout:2026-03-20T13:02:55.580+0000 7fc785c21980 -1 shutting down
2026-03-20T13:02:55.780 INFO:tasks.ceph.mgr.y.vm00.stderr:daemon-helper: command crashed with signal 15
2026-03-20T13:07:50.934 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale
2026-03-20T13:07:50.934 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T13:07:50.934 INFO:teuthology.orchestra.run.vm00.stdout:WARNING dedup.test_dedup:test_dedup.py:2748 test_dedup_dry_large_scale: failed!!
2026-03-20T13:08:01.544 INFO:teuthology.orchestra.run.vm00.stdout:FAILED [ 97%]
2026-03-20T13:08:01.546 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_cleanup PASSED [100%]
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:=================================== FAILURES ===================================
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:__________________________ test_dedup_dry_large_scale __________________________
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:self =
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:    def _new_conn(self) -> socket.socket:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        """Establish a socket connection and set nodelay settings on it.
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        :return: New socket connection.
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        """
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        try:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:>           sock = connection.create_connection(
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:                (self._dns_host, self.port),
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:                self.timeout,
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:                source_address=self.source_address,
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:                socket_options=self.socket_options,
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:            )
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/connection.py:204:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/util/connection.py:85: in create_connection
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:    raise err
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:address = ('vm00.local', 80), timeout = 60, source_address = None
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:socket_options = [(6, 1, 1)]
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:    def create_connection(
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        address: tuple[str, int],
2026-03-20T13:08:01.547
INFO:teuthology.orchestra.run.vm00.stdout:        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        source_address: tuple[str, int] | None = None,
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:    ) -> socket.socket:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        """Connect to *address* and return the socket object.
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        Convenience function. Connect to *address* (a 2-tuple ``(host,
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        port)``) and return the socket object. Passing the optional
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        *timeout* parameter will set the timeout on the socket instance
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        before attempting to connect. If no *timeout* is supplied, the
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        global default timeout setting returned by :func:`socket.getdefaulttimeout`
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        is used. If *source_address* is set it must be a tuple of (host, port)
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        for the socket to bind as a source address before making the connection.
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        An host of '' or port 0 tells the OS to use the default.
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        """
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        host, port = address
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        if host.startswith("["):
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:            host = host.strip("[]")
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        err = None
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        # The original create_connection function always returns all records.
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        family = allowed_gai_family()
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        try:
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:            host.encode("idna")
2026-03-20T13:08:01.547 INFO:teuthology.orchestra.run.vm00.stdout:        except UnicodeError:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            raise LocationParseError(f"'{host}', label empty or too long") from None
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            af, socktype, proto, canonname, sa = res
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            sock = None
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            try:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                sock = socket.socket(af, socktype, proto)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                # If provided, set socket level options before connecting.
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                _set_socket_options(sock, socket_options)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                if timeout is not _DEFAULT_TIMEOUT:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                    sock.settimeout(timeout)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                if source_address:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                    sock.bind(source_address)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:>               sock.connect(sa)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:E               ConnectionRefusedError: [Errno 111] Connection refused
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/util/connection.py:73: ConnectionRefusedError
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:The above exception was the direct cause of the following exception:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:self =
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:request =
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:    def send(self, request):
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:        try:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
proxy_url = self._proxy_config.proxy_url_for(request.url)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            manager = self._get_connection_manager(request.url, proxy_url)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            conn = manager.connection_from_url(request.url)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            self._setup_ssl_cert(conn, request.url, self._verify)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            if ensure_boolean(
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                os.environ.get('BOTO_EXPERIMENTAL__ADD_PROXY_HOST_HEADER', '')
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            ):
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                # This is currently an "experimental" feature which provides
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                # no guarantees of backwards compatibility. It may be subject
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                # to change or removal in any patch version. Anyone opting in
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                # to this feature should strictly pin botocore.
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                host = urlparse(request.url).hostname
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                conn.proxy_headers['host'] = host
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            request_target = self._get_request_target(request.url, proxy_url)
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:>           urllib_response = conn.urlopen(
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                method=request.method,
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                url=request_target,
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                body=request.body,
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                headers=request.headers,
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                retries=Retry(False),
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                assert_same_host=False,
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                preload_content=False,
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                decode_content=False,
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:                chunked=self._chunked(request.headers),
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:            )
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/httpsession.py:477:
2026-03-20T13:08:01.548 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/connectionpool.py:841: in urlopen
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    retries = retries.increment(
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/util/retry.py:465: in increment
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    raise reraise(type(error), error, _stacktrace)
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/util/util.py:39: in reraise
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    raise value
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/connectionpool.py:787: in urlopen
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    response = self._make_request(
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/connectionpool.py:493: in _make_request
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    conn.request(
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/awsrequest.py:96: in request
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    rval = super().request(method, url, body, headers, *args, **kwargs)
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/connection.py:500: in request
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    self.endheaders()
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:/home/ubuntu/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/http/client.py:1318: in endheaders
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    self._send_output(message_body,
encode_chunked=encode_chunked)
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/awsrequest.py:123: in _send_output
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    self.send(msg)
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/awsrequest.py:223: in send
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    return super().send(str)
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    ^^^^^^^^^^^^^^^^^
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:/home/ubuntu/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/http/client.py:1016: in send
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    self.connect()
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/connection.py:331: in connect
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    self.sock = self._new_conn()
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    ^^^^^^^^^^^^^^^^
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:self =
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:    def _new_conn(self) -> socket.socket:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:        """Establish a socket connection and set nodelay settings on it.
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:        :return: New socket connection.
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:        """
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:        try:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:            sock = connection.create_connection(
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:                (self._dns_host, self.port),
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:                self.timeout,
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:                source_address=self.source_address,
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:                socket_options=self.socket_options,
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:            )
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:        except socket.gaierror as e:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:            raise NameResolutionError(self.host, self, e) from e
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:        except SocketTimeout as e:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:            raise ConnectTimeoutError(
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:                self,
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:                f"Connection to {self.host} timed out.
(connect timeout={self.timeout})",
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:            ) from e
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:        except OSError as e:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:>           raise NewConnectionError(
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:                self, f"Failed to establish a new connection: {e}"
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:            ) from e
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:E           urllib3.exceptions.NewConnectionError: AWSHTTPConnection(host='vm00.local', port=80): Failed to establish a new connection: [Errno 111] Connection refused
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/urllib3/connection.py:219: NewConnectionError
2026-03-20T13:08:01.549 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:During handling of the above exception, another exception occurred:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    @pytest.mark.basic_test
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    def test_dedup_dry_large_scale():
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        #return
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        prepare_test()
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        max_copies_count=3
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        num_threads=64
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        num_files=32*1024
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:
size=1*KB
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        files=[]
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        config=TransferConfig(multipart_threshold=size, multipart_chunksize=1*MB)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        log.debug("test_dedup_dry_large_scale_new: connect to AWS ...")
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        gen_files_fixed_size(files, num_files, size, max_copies_count)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        conns=get_connections(num_threads)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        bucket_names=get_buckets(num_threads)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        for i in range(num_threads):
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:            conns[i].create_bucket(Bucket=bucket_names[i])
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        try:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:            threads_simple_dedup_with_tenants(files, conns, bucket_names, config, True)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        except:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:            log.warning("test_dedup_dry_large_scale: failed!!")
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:        finally:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:            # cleanup must be executed even after a failure
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:>           cleanup_all_buckets(bucket_names, conns)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:2751:
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:496:
in cleanup_all_buckets
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    delete_bucket_with_all_objects(bucket_name, conn)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:452: in delete_bucket_with_all_objects
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    listing=conn.list_objects(Bucket=bucket_name, Marker=marker, MaxKeys=max_keys)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/client.py:602: in _api_call
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    return self._make_api_call(operation_name, kwargs)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/context.py:123: in wrapper
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    return func(*args, **kwargs)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    ^^^^^^^^^^^^^^^^^^^^^
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/client.py:1060: in _make_api_call
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    http, parsed_response = self._make_request(
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/client.py:1084: in _make_request
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    return self._endpoint.make_request(operation_model, request_dict)
2026-03-20T13:08:01.957 INFO:teuthology.orchestra.run.vm00.stdout:    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-03-20T13:08:01.957
INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/endpoint.py:119: in make_request 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: return self._send_request(request_dict, operation_model) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/endpoint.py:200: in _send_request 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: while self._needs_retry( 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/endpoint.py:360: in _needs_retry 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: responses = self._event_emitter.emit( 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/hooks.py:412: in emit 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: return self._emitter.emit(aliased_event_name, **kwargs) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/hooks.py:256: in emit 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: return self._emit(event_name, kwargs) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/hooks.py:239: in _emit 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: response = handler(**kwargs) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: ^^^^^^^^^^^^^^^^^ 2026-03-20T13:08:01.958 
INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/retryhandler.py:207: in __call__ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: if self._checker(**checker_kwargs): 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/retryhandler.py:284: in __call__ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: should_retry = self._should_retry( 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/retryhandler.py:320: in _should_retry 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: return self._checker(attempt_number, response, caught_exception) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/retryhandler.py:363: in __call__ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: checker_response = checker( 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/retryhandler.py:247: in __call__ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: return self._check_caught_exception( 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/retryhandler.py:416: in _check_caught_exception 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: raise caught_exception 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/endpoint.py:279: in _do_get_response 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: http_response = self._send(request) 2026-03-20T13:08:01.958 
INFO:teuthology.orchestra.run.vm00.stdout: ^^^^^^^^^^^^^^^^^^^ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/endpoint.py:383: in _send 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: return self.http_session.send(request) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:self = 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout:request = 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: def send(self, request): 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: try: 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: proxy_url = self._proxy_config.proxy_url_for(request.url) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: manager = self._get_connection_manager(request.url, proxy_url) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: conn = manager.connection_from_url(request.url) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: self._setup_ssl_cert(conn, request.url, self._verify) 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: if ensure_boolean( 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: os.environ.get('BOTO_EXPERIMENTAL__ADD_PROXY_HOST_HEADER', '') 2026-03-20T13:08:01.958 INFO:teuthology.orchestra.run.vm00.stdout: ): 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: # This is currently an "experimental" feature which provides 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: # no guarantees of backwards 
compatibility. It may be subject 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: # to change or removal in any patch version. Anyone opting in 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: # to this feature should strictly pin botocore. 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: host = urlparse(request.url).hostname 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: conn.proxy_headers['host'] = host 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: request_target = self._get_request_target(request.url, proxy_url) 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response = conn.urlopen( 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: method=request.method, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: url=request_target, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: body=request.body, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: headers=request.headers, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: retries=Retry(False), 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: assert_same_host=False, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: preload_content=False, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: decode_content=False, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: chunked=self._chunked(request.headers), 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: ) 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: http_response = botocore.awsrequest.AWSResponse( 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: request.url, 2026-03-20T13:08:01.959 
INFO:teuthology.orchestra.run.vm00.stdout: urllib_response.status, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response.headers, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response, 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: ) 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: if not request.stream_output: 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: # Cause the raw stream to be exhausted immediately. We do it 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: # this way instead of using preload_content because 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: # preload_content will never buffer chunked responses 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: http_response.content 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: return http_response 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: except URLLib3SSLError as e: 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: raise SSLError(endpoint_url=request.url, error=e) 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: except (NewConnectionError, socket.gaierror) as e: 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:> raise EndpointConnectionError(endpoint_url=request.url, error=e) 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:E botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://vm00.local:80/eesnkifksinzbtuz-86?marker=&max-keys=1000&encoding-type=url" 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.11/site-packages/botocore/httpsession.py:506: 
EndpointConnectionError 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:----------------------------- Captured stderr call ----------------------------- 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setuser ceph since I am not root 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setgroup ceph since I am not root 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setuser ceph since I am not root 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setgroup ceph since I am not root 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setuser ceph since I am not root 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setgroup ceph since I am not root 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:failed to fetch mon config (--no-mon-config to skip) 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:------------------------------ Captured log call ------------------------------- 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:WARNING dedup.test_dedup:test_dedup.py:2748 test_dedup_dry_large_scale: failed!! 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:=========================== short test summary info ============================ 2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:FAILED test_dedup.py::test_dedup_dry_large_scale - botocore.exceptions.Endpoi... 
2026-03-20T13:08:01.959 INFO:teuthology.orchestra.run.vm00.stdout:================== 1 failed, 33 passed in 1535.63s (0:25:35) ===================
2026-03-20T13:08:02.259 INFO:teuthology.orchestra.run.vm00.stdout:py: exit 1 (1536.45 seconds) /home/ubuntu/cephtest/ceph/src/test/rgw/dedup> pytest -v -m 'basic_test or request_test or example_test' pid=64026
2026-03-20T13:08:02.260 INFO:teuthology.orchestra.run.vm00.stdout: py: FAIL code 1 (1539.31=setup[2.87]+cmd[1536.45] seconds)
2026-03-20T13:08:02.260 INFO:teuthology.orchestra.run.vm00.stdout: evaluation failed :( (1539.32 seconds)
2026-03-20T13:08:02.292 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:08:02.292 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 30, in nested
    vars.append(enter())
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/dedup_tests.py", line 191, in run_tests
    toxvenv_sh(ctx, remote, args, label="dedup tests against rgw")
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/dedup_tests.py", line 165, in toxvenv_sh
    return remote.sh(['source', activate, run.Raw('&&')] + args, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 97, in sh
    proc = self.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (dedup tests against rgw) on vm00 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test'"
2026-03-20T13:08:02.293 INFO:tasks.dedup_tests:Removing dedup-tests.conf file...
2026-03-20T13:08:02.293 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/deduptests.client.0.conf
2026-03-20T13:08:02.315 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph
2026-03-20T13:08:02.395 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T13:08:02.395 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T13:13:02.397 INFO:teuthology.orchestra.run.vm00.stderr:failed to fetch mon config (--no-mon-config to skip)
2026-03-20T13:13:02.399 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:13:02.400 INFO:tasks.dedup_tests:Removing dedup-tests...
2026-03-20T13:13:02.400 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/ceph
2026-03-20T13:13:02.935 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/dedup_tests.py", line 107, in create_users
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 30, in nested
    vars.append(enter())
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/dedup_tests.py", line 191, in run_tests
    toxvenv_sh(ctx, remote, args, label="dedup tests against rgw")
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/dedup_tests.py", line 165, in toxvenv_sh
    return remote.sh(['source', activate, run.Raw('&&')] + args, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 97, in sh
    proc = self.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (dedup tests against rgw) on vm00 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test'"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 112, in run_tasks
    manager.__enter__()
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/dedup_tests.py", line 240, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/dedup_tests.py", line 45, in download
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/dedup_tests.py", line 114, in create_users
    ctx.cluster.only(client).run(
  File "/home/teuthos/teuthology/teuthology/orchestra/cluster.py", line 85, in run
    procs = [remote.run(**kwargs, wait=_wait) for remote in remotes]
  File "/home/teuthos/teuthology/teuthology/orchestra/cluster.py", line 85, in <listcomp>
    procs = [remote.run(**kwargs, wait=_wait) for remote in remotes]
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
2026-03-20T13:13:02.935 DEBUG:teuthology.run_tasks:Unwinding manager dedup-tests
2026-03-20T13:13:02.937 DEBUG:teuthology.run_tasks:Unwinding manager tox
2026-03-20T13:13:02.939 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/tox-venv
2026-03-20T13:13:03.019 DEBUG:teuthology.run_tasks:Unwinding manager tox
2026-03-20T13:13:03.021 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/tox-venv
2026-03-20T13:13:03.034 DEBUG:teuthology.run_tasks:Unwinding manager rgw
2026-03-20T13:13:03.037 DEBUG:tasks.rgw.client.0:waiting for process to exit
2026-03-20T13:13:03.037 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T13:13:03.037 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:13:03.037 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.0.stdout 2>&1"
2026-03-20T13:13:03.037 INFO:tasks.rgw.client.0:Stopped
2026-03-20T13:13:03.037 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock
2026-03-20T13:13:03.087 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/vault-root-token
2026-03-20T13:13:03.157 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /home/ubuntu/cephtest/url_file
2026-03-20T13:13:03.219 INFO:tasks.util.rgw:rgwadmin: client.0 : ['gc', 'process', '--include-all']
2026-03-20T13:13:03.219 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'gc', 'process', '--include-all']
2026-03-20T13:13:03.219 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all
2026-03-20T13:13:03.290 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T13:13:03.290 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T13:18:03.292 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T13:18:03.290+0000 7ffa24926900 0 monclient(hunting): authenticate timed out after 300
2026-03-20T13:18:03.292 INFO:teuthology.orchestra.run.vm00.stderr:failed to fetch mon config (--no-mon-config to skip)
2026-03-20T13:18:03.293 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:18:03.294 ERROR:teuthology.run_tasks:Manager failed: rgw
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'
2026-03-20T13:18:03.294 DEBUG:teuthology.run_tasks:Unwinding manager openssl_keys
2026-03-20T13:18:03.296 DEBUG:teuthology.run_tasks:Unwinding manager ceph
2026-03-20T13:18:03.298 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean
2026-03-20T13:18:03.298 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json
2026-03-20T13:20:03.361 DEBUG:teuthology.orchestra.run:got remote process result: 124
2026-03-20T13:20:03.361 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
2026-03-20T13:20:03.362 INFO:teuthology.misc:Shutting down mds daemons...
2026-03-20T13:20:03.362 INFO:teuthology.misc:Shutting down osd daemons...
2026-03-20T13:20:03.362 DEBUG:tasks.ceph.osd.0:waiting for process to exit 2026-03-20T13:20:03.362 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T13:20:03.362 INFO:tasks.ceph.osd.0:Stopped 2026-03-20T13:20:03.362 DEBUG:tasks.ceph.osd.1:waiting for process to exit 2026-03-20T13:20:03.362 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T13:20:03.362 INFO:tasks.ceph.osd.1:Stopped 2026-03-20T13:20:03.363 DEBUG:tasks.ceph.osd.2:waiting for process to exit 2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T13:20:03.363 INFO:tasks.ceph.osd.2:Stopped 2026-03-20T13:20:03.363 DEBUG:tasks.ceph.osd.3:waiting for process to exit 2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T13:20:03.363 INFO:tasks.ceph.osd.3:Stopped 2026-03-20T13:20:03.363 DEBUG:tasks.ceph.osd.4:waiting for process to exit 2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T13:20:03.363 INFO:tasks.ceph.osd.4:Stopped 2026-03-20T13:20:03.363 DEBUG:tasks.ceph.osd.5:waiting for process to exit 2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T13:20:03.363 INFO:tasks.ceph.osd.5:Stopped 2026-03-20T13:20:03.363 DEBUG:tasks.ceph.osd.6:waiting for process to exit 2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T13:20:03.363 INFO:tasks.ceph.osd.6:Stopped 2026-03-20T13:20:03.363 DEBUG:tasks.ceph.osd.7:waiting for process to exit 2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T13:20:03.363 INFO:tasks.ceph.osd.7:Stopped 2026-03-20T13:20:03.363 INFO:teuthology.misc:Shutting down mgr daemons... 
2026-03-20T13:20:03.363 DEBUG:tasks.ceph.mgr.y:waiting for process to exit
2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T13:20:03.363 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:03.363 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i y'
2026-03-20T13:20:03.363 INFO:tasks.ceph.mgr.y:Stopped
2026-03-20T13:20:03.363 DEBUG:tasks.ceph.mgr.x:waiting for process to exit
2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T13:20:03.363 INFO:tasks.ceph.mgr.x:Stopped
2026-03-20T13:20:03.363 INFO:teuthology.misc:Shutting down mon daemons...
2026-03-20T13:20:03.363 DEBUG:tasks.ceph.mon.a:waiting for process to exit
2026-03-20T13:20:03.363 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T13:20:03.363 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:03.363 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a'
2026-03-20T13:20:03.364 INFO:tasks.ceph.mon.a:Stopped
2026-03-20T13:20:03.364 DEBUG:tasks.ceph.mon.c:waiting for process to exit
2026-03-20T13:20:03.364 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T13:20:03.364 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:03.364 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i c'
2026-03-20T13:20:03.364 INFO:tasks.ceph.mon.c:Stopped
2026-03-20T13:20:03.364 DEBUG:tasks.ceph.mon.b:waiting for process to exit
2026-03-20T13:20:03.364 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T13:20:03.364 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:03.364 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm06 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i b'
2026-03-20T13:20:03.364 INFO:tasks.ceph.mon.b:Stopped
2026-03-20T13:20:03.364 INFO:tasks.ceph:Checking cluster log for badness...
2026-03-20T13:20:03.364 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v 'not have an application enabled' | head -n 1
2026-03-20T13:20:03.390 INFO:teuthology.orchestra.run.vm00.stdout:2026-03-20T12:54:33.077384+0000 mon.a (mon.0) 704 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)
2026-03-20T13:20:03.390 WARNING:tasks.ceph:Found errors (ERR|WRN|SEC) in cluster log
2026-03-20T13:20:03.390 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-0 on ubuntu@vm00.local
2026-03-20T13:20:03.390 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-0
2026-03-20T13:20:03.514 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-1 on ubuntu@vm00.local
2026-03-20T13:20:03.514 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-1
2026-03-20T13:20:03.595 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-2 on ubuntu@vm00.local
2026-03-20T13:20:03.595 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-2
2026-03-20T13:20:03.676 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-3 on ubuntu@vm00.local
2026-03-20T13:20:03.676 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-3
2026-03-20T13:20:03.757 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-4 on ubuntu@vm06.local
2026-03-20T13:20:03.758 DEBUG:teuthology.orchestra.run.vm06:> sync && sudo umount -f /var/lib/ceph/osd/ceph-4
2026-03-20T13:20:03.874 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-5 on ubuntu@vm06.local
2026-03-20T13:20:03.874 DEBUG:teuthology.orchestra.run.vm06:> sync && sudo umount -f /var/lib/ceph/osd/ceph-5
2026-03-20T13:20:03.972 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-6 on ubuntu@vm06.local
2026-03-20T13:20:03.972 DEBUG:teuthology.orchestra.run.vm06:> sync && sudo umount -f /var/lib/ceph/osd/ceph-6
2026-03-20T13:20:04.073 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-7 on ubuntu@vm06.local
2026-03-20T13:20:04.073 DEBUG:teuthology.orchestra.run.vm06:> sync && sudo umount -f /var/lib/ceph/osd/ceph-7
2026-03-20T13:20:04.166 INFO:tasks.ceph:Archiving mon data...
2026-03-20T13:20:04.166 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/mon/ceph-a to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/data/mon.a.tgz
2026-03-20T13:20:04.166 DEBUG:teuthology.orchestra.run.vm00:> mktemp
2026-03-20T13:20:04.183 INFO:teuthology.orchestra.run.vm00.stdout:/tmp/tmp.OqzmmGhHlF
2026-03-20T13:20:04.183 DEBUG:teuthology.orchestra.run.vm00:> sudo tar cz -f - -C /var/lib/ceph/mon/ceph-a -- . > /tmp/tmp.OqzmmGhHlF
2026-03-20T13:20:04.322 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0666 /tmp/tmp.OqzmmGhHlF
2026-03-20T13:20:04.401 DEBUG:teuthology.orchestra.remote:vm00:/tmp/tmp.OqzmmGhHlF is 499KB
2026-03-20T13:20:04.457 DEBUG:teuthology.orchestra.run.vm00:> rm -fr /tmp/tmp.OqzmmGhHlF
2026-03-20T13:20:04.470 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/mon/ceph-c to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/data/mon.c.tgz
2026-03-20T13:20:04.470 DEBUG:teuthology.orchestra.run.vm00:> mktemp
2026-03-20T13:20:04.523 INFO:teuthology.orchestra.run.vm00.stdout:/tmp/tmp.dXlpvqeESl
2026-03-20T13:20:04.524 DEBUG:teuthology.orchestra.run.vm00:> sudo tar cz -f - -C /var/lib/ceph/mon/ceph-c -- . > /tmp/tmp.dXlpvqeESl
2026-03-20T13:20:04.663 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0666 /tmp/tmp.dXlpvqeESl
2026-03-20T13:20:04.740 DEBUG:teuthology.orchestra.remote:vm00:/tmp/tmp.dXlpvqeESl is 518KB
2026-03-20T13:20:04.797 DEBUG:teuthology.orchestra.run.vm00:> rm -fr /tmp/tmp.dXlpvqeESl
2026-03-20T13:20:04.810 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/lib/ceph/mon/ceph-b to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/data/mon.b.tgz
2026-03-20T13:20:04.810 DEBUG:teuthology.orchestra.run.vm06:> mktemp
2026-03-20T13:20:04.827 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:04.827 INFO:teuthology.orchestra.run.vm06.stderr:mktemp: failed to create file via template ‘/tmp/tmp.XXXXXXXXXX’: No space left on device
2026-03-20T13:20:04.867 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-03-20T13:20:04.884 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-03-20T13:20:04.902 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-03-20T13:20:04.902 INFO:tasks.ceph:Archiving crash dumps...
2026-03-20T13:20:04.902 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/crash to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/remote/vm00/crash
2026-03-20T13:20:04.902 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/crash -- .
2026-03-20T13:20:04.933 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/lib/ceph/crash to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/remote/vm06/crash
2026-03-20T13:20:04.933 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/lib/ceph/crash -- .
2026-03-20T13:20:04.961 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/crash to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/remote/vm09/crash 2026-03-20T13:20:04.961 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/crash -- . 2026-03-20T13:20:04.990 INFO:tasks.ceph:Compressing logs... 2026-03-20T13:20:04.990 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-20T13:20:04.992 DEBUG:teuthology.orchestra.run.vm06:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-20T13:20:05.004 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-20T13:20:05.013 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.tmp-client.admin.50183.log 2026-03-20T13:20:05.013 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.0.log 2026-03-20T13:20:05.013 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.tmp-client.admin.50183.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.1.log 2026-03-20T13:20:05.013 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph.tmp-client.admin.50183.log.gz 2026-03-20T13:20:05.013 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.0.log: /var/log/ceph/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.2.log 2026-03-20T13:20:05.014 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.3.log 2026-03-20T13:20:05.018 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.3.log: gzip -5 --verbose -- 
/var/log/ceph/ceph-mon.a.log 2026-03-20T13:20:05.025 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.4.log 2026-03-20T13:20:05.025 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.5.log 2026-03-20T13:20:05.025 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.6.log 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-osd.4.log.gz: No space left on device 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-osd.5.log.gz: No space left on device 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.7.log 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mon.b.log 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-osd.6.log.gz: No space left on device 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-osd.7.log.gz: No space left on device 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.log 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mgr.x.log 2026-03-20T13:20:05.026 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-mon.b.log.gz: No space left on device 2026-03-20T13:20:05.027 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph.log.gz: No space left on device 2026-03-20T13:20:05.027 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58778.log 2026-03-20T13:20:05.027 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-mgr.x.log.gz: No space left on device 2026-03-20T13:20:05.027 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58825.log 
2026-03-20T13:20:05.027 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.58778.log.gz: No space left on device 2026-03-20T13:20:05.027 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.audit.log 2026-03-20T13:20:05.027 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.58825.log.gz: No space left on device 2026-03-20T13:20:05.027 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58872.log 2026-03-20T13:20:05.028 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph.audit.log.gz: No space left on device 2026-03-20T13:20:05.028 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58919.log 2026-03-20T13:20:05.028 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58966.log 2026-03-20T13:20:05.028 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.58872.log.gz: No space left on device 2026-03-20T13:20:05.028 INFO:teuthology.orchestra.run.vm06.stderr:gzip: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.59013.log 2026-03-20T13:20:05.028 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/ceph-client.admin.58919.log.gz: No space left on device 2026-03-20T13:20:05.029 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.58966.log.gz: No space left on device 2026-03-20T13:20:05.029 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.59060.log 2026-03-20T13:20:05.029 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.59107.log 2026-03-20T13:20:05.029 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.59013.log.gz: No space left on device 2026-03-20T13:20:05.029 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62642.log 
2026-03-20T13:20:05.029 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.59060.log.gz: No space left on device
2026-03-20T13:20:05.029 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.59107.log.gz: No space left on device
2026-03-20T13:20:05.029 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62689.log
2026-03-20T13:20:05.030 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.62642.log.gz: No space left on device
2026-03-20T13:20:05.030 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62736.log
2026-03-20T13:20:05.030 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.62689.log.gz: No space left on device
2026-03-20T13:20:05.030 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62783.log
2026-03-20T13:20:05.030 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mon.c.log
2026-03-20T13:20:05.030 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.62806.log
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.62736.log.gz: No space left on device
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.62783.log.gz: No space left on device
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.62913.log
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.63015.log
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.1.62806.log.gz: No space left on device
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.63117.log
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.1.62913.log.gz: No space left on device
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.1.63015.log.gz: No space left on device
2026-03-20T13:20:05.031 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.63219.log
2026-03-20T13:20:05.032 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.1.log
2026-03-20T13:20:05.032 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.1.63117.log.gz: No space left on device
2026-03-20T13:20:05.032 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.1.log
2026-03-20T13:20:05.032 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.1.63219.log.gz: No space left on device
2026-03-20T13:20:05.032 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/rgw.ceph.client.1.log.gz: No space left on device
2026-03-20T13:20:05.032 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ops-log-ceph-client.1.log.gz: No space left on device
2026-03-20T13:20:05.033 INFO:teuthology.orchestra.run.vm06.stderr:
2026-03-20T13:20:05.033 INFO:teuthology.orchestra.run.vm06.stderr:real 0m0.017s
2026-03-20T13:20:05.033 INFO:teuthology.orchestra.run.vm06.stderr:user 0m0.014s
2026-03-20T13:20:05.033 INFO:teuthology.orchestra.run.vm06.stderr:sys 0m0.019s
2026-03-20T13:20:05.044 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.56605.log
2026-03-20T13:20:05.051 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/ceph.log
2026-03-20T13:20:05.056 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.50308.log
2026-03-20T13:20:05.056 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.50355.log
2026-03-20T13:20:05.056 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.50402.log
2026-03-20T13:20:05.056 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.admin.50308.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.50308.log.gz
2026-03-20T13:20:05.056 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.50449.log
2026-03-20T13:20:05.056 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.admin.50355.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.50355.log.gz
2026-03-20T13:20:05.057 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50472.log
2026-03-20T13:20:05.057 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.admin.50402.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.50402.log.gz
2026-03-20T13:20:05.057 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50579.log
2026-03-20T13:20:05.057 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.admin.50449.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.50449.log.gz
2026-03-20T13:20:05.057 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50681.log
2026-03-20T13:20:05.057 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.2.50472.log: 83.2% -- replaced with /var/log/ceph/ceph-client.2.50472.log.gz
2026-03-20T13:20:05.057 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50783.log
2026-03-20T13:20:05.058 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.2.50579.log: 45.6% -- replaced with /var/log/ceph/ceph-client.2.50579.log.gz
2026-03-20T13:20:05.058 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50885.log
2026-03-20T13:20:05.058 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.2.50681.log: 45.6% -- replaced with /var/log/ceph/ceph-client.2.50681.log.gz
2026-03-20T13:20:05.058 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.2.50783.log: 44.2% -- replaced with /var/log/ceph/ceph-client.2.50783.log.gz
2026-03-20T13:20:05.058 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.2.log
2026-03-20T13:20:05.058 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.2.log
2026-03-20T13:20:05.058 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/ceph-client.2.50885.log: 44.2% -- replaced with /var/log/ceph/ceph-client.2.50885.log.gz
2026-03-20T13:20:05.059 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/rgw.ceph.client.2.log: /var/log/ceph/ops-log-ceph-client.2.log: 35.8% -- replaced with /var/log/ceph/ops-log-ceph-client.2.log.gz
2026-03-20T13:20:05.061 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.56605.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.56605.log.gz
2026-03-20T13:20:05.061 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mgr.y.log
2026-03-20T13:20:05.064 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.log: 92.8% -- replaced with /var/log/ceph/ceph.log.gz
2026-03-20T13:20:05.073 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.56686.log
2026-03-20T13:20:05.081 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.audit.log
2026-03-20T13:20:05.081 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.56686.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.56686.log.gz
2026-03-20T13:20:05.087 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-mgr.y.log: 94.5% -- replaced with /var/log/ceph/ceph-mgr.y.log.gz
2026-03-20T13:20:05.091 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.56974.log
2026-03-20T13:20:05.093 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.audit.log: 94.4% -- replaced with /var/log/ceph/ceph.audit.log.gz
2026-03-20T13:20:05.098 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60253.log
2026-03-20T13:20:05.098 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.56974.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.56974.log.gz
2026-03-20T13:20:05.108 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60319.log
2026-03-20T13:20:05.108 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60253.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60253.log.gz
2026-03-20T13:20:05.113 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60389.log
2026-03-20T13:20:05.123 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60319.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60319.log.gz
2026-03-20T13:20:05.123 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60414.log
2026-03-20T13:20:05.123 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60389.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60389.log.gz
2026-03-20T13:20:05.127 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60479.log
2026-03-20T13:20:05.138 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60414.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60414.log.gz
2026-03-20T13:20:05.138 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60528.log
2026-03-20T13:20:05.138 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60479.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60479.log.gz
2026-03-20T13:20:05.144 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60577.log
2026-03-20T13:20:05.144 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60528.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60528.log.gz
2026-03-20T13:20:05.153 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60626.log
2026-03-20T13:20:05.153 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60577.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60577.log.gz
2026-03-20T13:20:05.159 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60858.log
2026-03-20T13:20:05.159 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60626.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60626.log.gz
2026-03-20T13:20:05.169 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60864.log
2026-03-20T13:20:05.169 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60858.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60858.log.gz
2026-03-20T13:20:05.178 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60857.log
2026-03-20T13:20:05.178 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60864.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60864.log.gz
2026-03-20T13:20:05.187 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60862.log
2026-03-20T13:20:05.187 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60857.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60857.log.gz
2026-03-20T13:20:05.193 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60856.log
2026-03-20T13:20:05.195 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60862.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60862.log.gz
2026-03-20T13:20:05.211 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60849.log
2026-03-20T13:20:05.211 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60856.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60856.log.gz
2026-03-20T13:20:05.221 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60855.log
2026-03-20T13:20:05.221 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60849.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60849.log.gz
2026-03-20T13:20:05.231 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60852.log
2026-03-20T13:20:05.232 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60855.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60855.log.gz
2026-03-20T13:20:05.242 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61091.log
2026-03-20T13:20:05.242 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.60852.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.60852.log.gz
2026-03-20T13:20:05.253 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61082.log
2026-03-20T13:20:05.253 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61091.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61091.log.gz
2026-03-20T13:20:05.263 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61193.log
2026-03-20T13:20:05.263 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61082.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61082.log.gz
2026-03-20T13:20:05.276 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61220.log
2026-03-20T13:20:05.276 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61193.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61193.log.gz
2026-03-20T13:20:05.286 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61259.log
2026-03-20T13:20:05.286 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61220.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61220.log.gz
2026-03-20T13:20:05.297 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61243.log
2026-03-20T13:20:05.297 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61259.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61259.log.gz
2026-03-20T13:20:05.307 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61248.log
2026-03-20T13:20:05.307 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61243.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61243.log.gz
2026-03-20T13:20:05.317 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61315.log
2026-03-20T13:20:05.318 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61248.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61248.log.gz
2026-03-20T13:20:05.328 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61459.log
2026-03-20T13:20:05.328 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61315.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61315.log.gz
2026-03-20T13:20:05.338 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61506.log
2026-03-20T13:20:05.339 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61459.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61459.log.gz
2026-03-20T13:20:05.349 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61543.log
2026-03-20T13:20:05.349 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61506.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61506.log.gz
2026-03-20T13:20:05.360 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61655.log
2026-03-20T13:20:05.360 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61543.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61543.log.gz
2026-03-20T13:20:05.370 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61654.log
2026-03-20T13:20:05.370 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61655.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61655.log.gz
2026-03-20T13:20:05.377 INFO:teuthology.orchestra.run.vm00.stderr: 92.4% -- replaced with /var/log/ceph/ceph-mon.c.log.gz
2026-03-20T13:20:05.380 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61658.log
2026-03-20T13:20:05.380 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61654.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61654.log.gz
2026-03-20T13:20:05.389 INFO:teuthology.orchestra.run.vm09.stderr: 93.6% -- replaced with /var/log/ceph/rgw.ceph.client.2.log.gz
2026-03-20T13:20:05.391 INFO:teuthology.orchestra.run.vm09.stderr:
2026-03-20T13:20:05.391 INFO:teuthology.orchestra.run.vm09.stderr:real 0m0.344s
2026-03-20T13:20:05.391 INFO:teuthology.orchestra.run.vm09.stderr:user 0m0.327s
2026-03-20T13:20:05.391 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m0.029s
2026-03-20T13:20:05.393 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61666.log
2026-03-20T13:20:05.393 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61658.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61658.log.gz
2026-03-20T13:20:05.399 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61708.log
2026-03-20T13:20:05.402 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61666.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61666.log.gz
2026-03-20T13:20:05.406 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61851.log
2026-03-20T13:20:05.406 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61708.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61708.log.gz
2026-03-20T13:20:05.419 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61900.log
2026-03-20T13:20:05.419 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61851.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61851.log.gz
2026-03-20T13:20:05.432 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61949.log
2026-03-20T13:20:05.432 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61900.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61900.log.gz
2026-03-20T13:20:05.445 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.61996.log
2026-03-20T13:20:05.445 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61949.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61949.log.gz
2026-03-20T13:20:05.456 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62045.log
2026-03-20T13:20:05.456 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.61996.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.61996.log.gz
2026-03-20T13:20:05.465 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62092.log
2026-03-20T13:20:05.465 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62045.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62045.log.gz
2026-03-20T13:20:05.476 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62141.log
2026-03-20T13:20:05.476 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62092.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62092.log.gz
2026-03-20T13:20:05.484 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.62164.log
2026-03-20T13:20:05.484 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62141.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62141.log.gz
2026-03-20T13:20:05.495 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.62279.log
2026-03-20T13:20:05.495 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.62164.log: 95.0% -- replaced with /var/log/ceph/ceph-client.0.62164.log.gz
2026-03-20T13:20:05.503 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.62381.log
2026-03-20T13:20:05.506 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.62279.log: 46.0% -- replaced with /var/log/ceph/ceph-client.0.62279.log.gz
2026-03-20T13:20:05.516 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.62483.log
2026-03-20T13:20:05.517 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.62381.log: 45.3% -- replaced with /var/log/ceph/ceph-client.0.62381.log.gz
2026-03-20T13:20:05.524 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.62585.log
2026-03-20T13:20:05.527 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.62483.log: 45.3% -- replaced with /var/log/ceph/ceph-client.0.62483.log.gz
2026-03-20T13:20:05.537 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.0.log
2026-03-20T13:20:05.538 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.62585.log: 44.2% -- replaced with /var/log/ceph/ceph-client.0.62585.log.gz
2026-03-20T13:20:05.547 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.0.log
2026-03-20T13:20:05.557 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/rgw.ceph.client.0.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.0.63855.log
2026-03-20T13:20:05.568 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ops-log-ceph-client.0.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64044.log
2026-03-20T13:20:05.568 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.63855.log: 84.5% -- replaced with /var/log/ceph/ceph-client.0.63855.log.gz
2026-03-20T13:20:05.580 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64185.log
2026-03-20T13:20:05.589 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64044.log: 83.1% -- replaced with /var/log/ceph/ceph-client.admin.64044.log.gz
2026-03-20T13:20:05.599 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64219.log
2026-03-20T13:20:05.599 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64185.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.64185.log.gz
2026-03-20T13:20:05.617 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64253.log
2026-03-20T13:20:05.617 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64219.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.64219.log.gz
2026-03-20T13:20:05.631 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64350.log
2026-03-20T13:20:05.640 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64253.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.64253.log.gz
2026-03-20T13:20:05.650 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64447.log
2026-03-20T13:20:05.650 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64350.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.64350.log.gz
2026-03-20T13:20:05.670 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64630.log
2026-03-20T13:20:05.670 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64447.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.64447.log.gz
2026-03-20T13:20:05.680 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64664.log
2026-03-20T13:20:05.680 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64630.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.64630.log.gz
2026-03-20T13:20:05.691 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64698.log
2026-03-20T13:20:05.691 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64664.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.64664.log.gz
2026-03-20T13:20:05.705 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64809.log
2026-03-20T13:20:05.714 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64698.log: 82.7% -- replaced with /var/log/ceph/ceph-client.admin.64698.log.gz
2026-03-20T13:20:05.724 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.64907.log
2026-03-20T13:20:05.724 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64809.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.64809.log.gz
2026-03-20T13:20:05.734 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65004.log
2026-03-20T13:20:05.744 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.64907.log: 93.4% -- replaced with /var/log/ceph/ceph-client.admin.64907.log.gz
2026-03-20T13:20:05.754 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65038.log
2026-03-20T13:20:05.754 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65004.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65004.log.gz
2026-03-20T13:20:05.764 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65074.log
2026-03-20T13:20:05.764 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65038.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65038.log.gz
2026-03-20T13:20:05.775 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65108.log
2026-03-20T13:20:05.775 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65074.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65074.log.gz
2026-03-20T13:20:05.795 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65935.log
2026-03-20T13:20:05.795 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65108.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65108.log.gz
2026-03-20T13:20:05.808 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65969.log
2026-03-20T13:20:05.808 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65935.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65935.log.gz
2026-03-20T13:20:05.822 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66003.log
2026-03-20T13:20:05.822 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65969.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65969.log.gz
2026-03-20T13:20:05.826 INFO:teuthology.orchestra.run.vm00.stderr: 91.3% -- replaced with /var/log/ceph/ceph-mon.a.log.gz
2026-03-20T13:20:05.832 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66107.log
2026-03-20T13:20:05.832 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66003.log: 82.9% -- replaced with /var/log/ceph/ceph-client.admin.66003.log.gz
2026-03-20T13:20:05.846 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66205.log
2026-03-20T13:20:05.846 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66107.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.66107.log.gz
2026-03-20T13:20:05.859 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66302.log
2026-03-20T13:20:05.869 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66336.log
2026-03-20T13:20:05.869 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66302.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66302.log.gz
2026-03-20T13:20:05.872 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66205.log: 96.7% -- replaced with /var/log/ceph/ceph-client.admin.66205.log.gz
2026-03-20T13:20:05.879 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66371.log
2026-03-20T13:20:05.879 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66336.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66336.log.gz
2026-03-20T13:20:05.884 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66405.log
2026-03-20T13:20:05.894 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66371.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66371.log.gz
2026-03-20T13:20:05.894 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66493.log
2026-03-20T13:20:05.894 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66405.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66405.log.gz
2026-03-20T13:20:05.899 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66527.log
2026-03-20T13:20:05.909 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66493.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66493.log.gz
2026-03-20T13:20:05.909 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66561.log
2026-03-20T13:20:05.909 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66527.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66527.log.gz
2026-03-20T13:20:05.923 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66664.log
2026-03-20T13:20:05.923 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66561.log: 83.1% -- replaced with /var/log/ceph/ceph-client.admin.66561.log.gz
2026-03-20T13:20:05.933 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66762.log
2026-03-20T13:20:05.933 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66664.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.66664.log.gz
2026-03-20T13:20:05.939 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66859.log
2026-03-20T13:20:05.948 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66762.log: 93.0% -- replaced with /var/log/ceph/ceph-client.admin.66762.log.gz
2026-03-20T13:20:05.951 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66893.log
2026-03-20T13:20:05.951 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66859.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66859.log.gz
2026-03-20T13:20:05.954 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66929.log
2026-03-20T13:20:05.963 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66893.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66893.log.gz
2026-03-20T13:20:05.966 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66963.log
2026-03-20T13:20:05.966 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66929.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66929.log.gz
2026-03-20T13:20:05.977 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67177.log
2026-03-20T13:20:05.977 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66963.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66963.log.gz
2026-03-20T13:20:05.987 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67211.log
2026-03-20T13:20:05.987 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67177.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67177.log.gz
2026-03-20T13:20:05.999 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67245.log
2026-03-20T13:20:05.999 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67211.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67211.log.gz
2026-03-20T13:20:06.009 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67348.log
2026-03-20T13:20:06.009 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67245.log: 83.1% -- replaced with /var/log/ceph/ceph-client.admin.67245.log.gz
2026-03-20T13:20:06.014 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67446.log
2026-03-20T13:20:06.024 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67348.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.67348.log.gz
2026-03-20T13:20:06.024 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67543.log
2026-03-20T13:20:06.027 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67446.log: 89.6% -- replaced with /var/log/ceph/ceph-client.admin.67446.log.gz
2026-03-20T13:20:06.030 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67577.log
2026-03-20T13:20:06.039 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67543.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67543.log.gz
2026-03-20T13:20:06.039 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67613.log
2026-03-20T13:20:06.039 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67577.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67577.log.gz
2026-03-20T13:20:06.044 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67647.log
2026-03-20T13:20:06.054 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67613.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67613.log.gz
2026-03-20T13:20:06.054 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67781.log
2026-03-20T13:20:06.054 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67647.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67647.log.gz
2026-03-20T13:20:06.066 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67815.log
2026-03-20T13:20:06.066 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67781.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67781.log.gz
2026-03-20T13:20:06.076 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67849.log
2026-03-20T13:20:06.076 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67815.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67815.log.gz
2026-03-20T13:20:06.081 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67952.log
2026-03-20T13:20:06.091 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67849.log: 83.0% -- replaced with /var/log/ceph/ceph-client.admin.67849.log.gz
2026-03-20T13:20:06.091 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.68050.log
2026-03-20T13:20:06.091 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67952.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.67952.log.gz
2026-03-20T13:20:06.098 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.68147.log
2026-03-20T13:20:06.106 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.68050.log: 93.2% -- replaced with /var/log/ceph/ceph-client.admin.68050.log.gz
2026-03-20T13:20:06.109 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose --
/var/log/ceph/ceph-client.admin.68181.log 2026-03-20T13:20:06.109 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.68147.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.68147.log.gz 2026-03-20T13:20:06.117 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.68217.log 2026-03-20T13:20:06.117 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.68181.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.68181.log.gz 2026-03-20T13:20:06.127 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.68251.log 2026-03-20T13:20:06.127 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.68217.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.68217.log.gz 2026-03-20T13:20:06.132 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69109.log 2026-03-20T13:20:06.142 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.68251.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.68251.log.gz 2026-03-20T13:20:06.142 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69143.log 2026-03-20T13:20:06.142 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69109.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.69109.log.gz 2026-03-20T13:20:06.147 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69177.log 2026-03-20T13:20:06.157 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69143.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.69143.log.gz 2026-03-20T13:20:06.157 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69280.log 2026-03-20T13:20:06.157 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69177.log: 83.0% -- replaced 
with /var/log/ceph/ceph-client.admin.69177.log.gz 2026-03-20T13:20:06.162 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69378.log 2026-03-20T13:20:06.169 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69280.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.69280.log.gz 2026-03-20T13:20:06.172 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69475.log 2026-03-20T13:20:06.175 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69378.log: 90.3% -- replaced with /var/log/ceph/ceph-client.admin.69378.log.gz 2026-03-20T13:20:06.185 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69509.log 2026-03-20T13:20:06.185 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69475.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.69475.log.gz 2026-03-20T13:20:06.195 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69545.log 2026-03-20T13:20:06.195 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69509.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.69509.log.gz 2026-03-20T13:20:06.206 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69579.log 2026-03-20T13:20:06.206 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69545.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.69545.log.gz 2026-03-20T13:20:06.216 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69613.log 2026-03-20T13:20:06.216 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69579.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.69579.log.gz 2026-03-20T13:20:06.229 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.69710.log 2026-03-20T13:20:06.229 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69613.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.69613.log.gz 2026-03-20T13:20:06.239 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.69807.log 2026-03-20T13:20:06.239 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69710.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.69710.log.gz 2026-03-20T13:20:06.244 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70132.log 2026-03-20T13:20:06.254 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.69807.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.69807.log.gz 2026-03-20T13:20:06.254 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70166.log 2026-03-20T13:20:06.254 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70132.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70132.log.gz 2026-03-20T13:20:06.268 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70200.log 2026-03-20T13:20:06.268 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70166.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70166.log.gz 2026-03-20T13:20:06.278 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70303.log 2026-03-20T13:20:06.278 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70200.log: 83.0% -- replaced with /var/log/ceph/ceph-client.admin.70200.log.gz 2026-03-20T13:20:06.283 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70401.log 2026-03-20T13:20:06.293 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70303.log: 85.0% -- 
replaced with /var/log/ceph/ceph-client.admin.70303.log.gz 2026-03-20T13:20:06.293 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70498.log 2026-03-20T13:20:06.296 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70401.log: 94.2% -- replaced with /var/log/ceph/ceph-client.admin.70401.log.gz 2026-03-20T13:20:06.299 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70532.log 2026-03-20T13:20:06.299 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70498.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70498.log.gz 2026-03-20T13:20:06.308 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70568.log 2026-03-20T13:20:06.308 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70532.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70532.log.gz 2026-03-20T13:20:06.318 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70602.log 2026-03-20T13:20:06.318 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70568.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70568.log.gz 2026-03-20T13:20:06.328 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70636.log 2026-03-20T13:20:06.328 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70602.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.70602.log.gz 2026-03-20T13:20:06.333 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70733.log 2026-03-20T13:20:06.343 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70636.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.70830.log 2026-03-20T13:20:06.343 INFO:teuthology.orchestra.run.vm00.stderr: 85.1% -- replaced with 
/var/log/ceph/ceph-client.admin.70636.log.gz 2026-03-20T13:20:06.343 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71562.log 2026-03-20T13:20:06.343 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.70733.log: /var/log/ceph/ceph-client.admin.70830.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.70733.log.gz 2026-03-20T13:20:06.343 INFO:teuthology.orchestra.run.vm00.stderr: 85.3% -- replaced with /var/log/ceph/ceph-client.admin.70830.log.gz 2026-03-20T13:20:06.354 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71596.log 2026-03-20T13:20:06.354 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71562.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.71562.log.gz 2026-03-20T13:20:06.359 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71630.log 2026-03-20T13:20:06.369 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71596.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.71596.log.gz 2026-03-20T13:20:06.369 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71733.log 2026-03-20T13:20:06.369 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71630.log: 83.1% -- replaced with /var/log/ceph/ceph-client.admin.71630.log.gz 2026-03-20T13:20:06.383 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71831.log 2026-03-20T13:20:06.383 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71733.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.71733.log.gz 2026-03-20T13:20:06.393 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71928.log 2026-03-20T13:20:06.398 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71831.log: gzip -5 
--verbose -- /var/log/ceph/ceph-client.admin.71962.log 2026-03-20T13:20:06.398 INFO:teuthology.orchestra.run.vm00.stderr: 96.3% -- replaced with /var/log/ceph/ceph-client.admin.71831.log.gz 2026-03-20T13:20:06.404 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71928.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.71928.log.gz 2026-03-20T13:20:06.408 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.71998.log 2026-03-20T13:20:06.408 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71962.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.71962.log.gz 2026-03-20T13:20:06.420 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72032.log 2026-03-20T13:20:06.420 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.71998.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.71998.log.gz 2026-03-20T13:20:06.430 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72066.log 2026-03-20T13:20:06.430 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72032.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72032.log.gz 2026-03-20T13:20:06.435 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72163.log 2026-03-20T13:20:06.445 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72066.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.72066.log.gz 2026-03-20T13:20:06.445 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72260.log 2026-03-20T13:20:06.445 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72163.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.72163.log.gz 2026-03-20T13:20:06.450 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.72357.log 2026-03-20T13:20:06.460 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72260.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.72260.log.gz 2026-03-20T13:20:06.460 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72622.log 2026-03-20T13:20:06.460 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72357.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.72357.log.gz 2026-03-20T13:20:06.474 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72656.log 2026-03-20T13:20:06.474 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72622.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72622.log.gz 2026-03-20T13:20:06.481 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72690.log 2026-03-20T13:20:06.482 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72656.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72656.log.gz 2026-03-20T13:20:06.484 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72793.log 2026-03-20T13:20:06.492 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72690.log: 83.1% -- replaced with /var/log/ceph/ceph-client.admin.72690.log.gz 2026-03-20T13:20:06.497 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72891.log 2026-03-20T13:20:06.498 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72793.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.72793.log.gz 2026-03-20T13:20:06.506 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.72988.log 2026-03-20T13:20:06.508 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72891.log: 89.4% -- 
replaced with /var/log/ceph/ceph-client.admin.72891.log.gz 2026-03-20T13:20:06.516 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73022.log 2026-03-20T13:20:06.516 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.72988.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.72988.log.gz 2026-03-20T13:20:06.522 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73058.log 2026-03-20T13:20:06.531 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73022.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73092.log 2026-03-20T13:20:06.531 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73022.log.gz 2026-03-20T13:20:06.531 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73058.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73058.log.gz 2026-03-20T13:20:06.533 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73126.log 2026-03-20T13:20:06.538 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73092.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.73092.log.gz 2026-03-20T13:20:06.546 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73223.log 2026-03-20T13:20:06.547 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73126.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.73126.log.gz 2026-03-20T13:20:06.547 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73320.log 2026-03-20T13:20:06.547 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73223.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.73223.log.gz 2026-03-20T13:20:06.562 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.73417.log 2026-03-20T13:20:06.563 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73320.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.73320.log.gz 2026-03-20T13:20:06.564 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73514.log 2026-03-20T13:20:06.574 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73417.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73611.log 2026-03-20T13:20:06.574 INFO:teuthology.orchestra.run.vm00.stderr: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.73417.log.gz 2026-03-20T13:20:06.575 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73514.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.73514.log.gz 2026-03-20T13:20:06.579 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73708.log 2026-03-20T13:20:06.583 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73611.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.73611.log.gz 2026-03-20T13:20:06.588 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73805.log 2026-03-20T13:20:06.589 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73708.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.73708.log.gz 2026-03-20T13:20:06.600 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73902.log 2026-03-20T13:20:06.601 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73805.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.73805.log.gz 2026-03-20T13:20:06.610 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.73999.log 2026-03-20T13:20:06.611 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73902.log: 85.1% -- 
replaced with /var/log/ceph/ceph-client.admin.73902.log.gz 2026-03-20T13:20:06.620 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74096.log 2026-03-20T13:20:06.621 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.73999.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.73999.log.gz 2026-03-20T13:20:06.629 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74193.log 2026-03-20T13:20:06.630 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74096.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.74096.log.gz 2026-03-20T13:20:06.639 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74290.log 2026-03-20T13:20:06.640 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74193.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.74193.log.gz 2026-03-20T13:20:06.649 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74387.log 2026-03-20T13:20:06.650 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74290.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.74290.log.gz 2026-03-20T13:20:06.659 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74484.log 2026-03-20T13:20:06.660 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74387.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.74387.log.gz 2026-03-20T13:20:06.669 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74581.log 2026-03-20T13:20:06.670 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74484.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.74484.log.gz 2026-03-20T13:20:06.679 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.74678.log 2026-03-20T13:20:06.680 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74581.log: 85.3% -- replaced with /var/log/ceph/ceph-client.admin.74581.log.gz 2026-03-20T13:20:06.684 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74775.log 2026-03-20T13:20:06.694 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74872.log 2026-03-20T13:20:06.695 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74678.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.74678.log.gz 2026-03-20T13:20:06.695 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74775.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.74775.log.gz 2026-03-20T13:20:06.695 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.74969.log 2026-03-20T13:20:06.700 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75066.log 2026-03-20T13:20:06.701 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.74969.log: /var/log/ceph/ceph-client.admin.74872.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.74872.log.gz 2026-03-20T13:20:06.701 INFO:teuthology.orchestra.run.vm00.stderr: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.74969.log.gz 2026-03-20T13:20:06.725 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75163.log 2026-03-20T13:20:06.726 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75066.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.75066.log.gz 2026-03-20T13:20:06.735 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75260.log 2026-03-20T13:20:06.736 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75163.log: 85.2% -- 
replaced with /var/log/ceph/ceph-client.admin.75163.log.gz 2026-03-20T13:20:06.745 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75357.log 2026-03-20T13:20:06.746 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75260.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.75260.log.gz 2026-03-20T13:20:06.755 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75454.log 2026-03-20T13:20:06.756 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75357.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.75357.log.gz 2026-03-20T13:20:06.766 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75551.log 2026-03-20T13:20:06.767 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75454.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.75454.log.gz 2026-03-20T13:20:06.776 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75648.log 2026-03-20T13:20:06.777 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75551.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.75551.log.gz 2026-03-20T13:20:06.783 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75745.log 2026-03-20T13:20:06.791 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75648.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.75648.log.gz 2026-03-20T13:20:06.793 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.75842.log 2026-03-20T13:20:06.794 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75745.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.75745.log.gz 2026-03-20T13:20:06.806 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.75939.log 2026-03-20T13:20:06.807 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75842.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.75842.log.gz 2026-03-20T13:20:06.816 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76036.log 2026-03-20T13:20:06.817 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.75939.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.75939.log.gz 2026-03-20T13:20:06.829 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76133.log 2026-03-20T13:20:06.830 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76036.log: 85.4% -- replaced with /var/log/ceph/ceph-client.admin.76036.log.gz 2026-03-20T13:20:06.839 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76230.log 2026-03-20T13:20:06.840 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76133.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.76133.log.gz 2026-03-20T13:20:06.844 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76327.log 2026-03-20T13:20:06.849 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76230.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.76230.log.gz 2026-03-20T13:20:06.854 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76424.log 2026-03-20T13:20:06.855 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76327.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.76327.log.gz 2026-03-20T13:20:06.860 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76521.log 2026-03-20T13:20:06.862 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76424.log: 85.2% -- 
replaced with /var/log/ceph/ceph-client.admin.76424.log.gz 2026-03-20T13:20:06.870 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76618.log 2026-03-20T13:20:06.871 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76521.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.76521.log.gz 2026-03-20T13:20:06.876 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76715.log 2026-03-20T13:20:06.878 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76618.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.76618.log.gz 2026-03-20T13:20:06.886 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76812.log 2026-03-20T13:20:06.887 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76715.log: 85.4% -- replaced with /var/log/ceph/ceph-client.admin.76715.log.gz 2026-03-20T13:20:06.891 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.76909.log 2026-03-20T13:20:06.895 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76812.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.76812.log.gz 2026-03-20T13:20:06.901 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77006.log 2026-03-20T13:20:06.902 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.76909.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.76909.log.gz 2026-03-20T13:20:06.911 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77103.log 2026-03-20T13:20:06.912 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77006.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.77006.log.gz 2026-03-20T13:20:06.925 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.77200.log 2026-03-20T13:20:06.926 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77103.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.77103.log.gz 2026-03-20T13:20:06.939 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77297.log 2026-03-20T13:20:06.939 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77200.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.77200.log.gz 2026-03-20T13:20:06.953 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77394.log 2026-03-20T13:20:06.953 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77297.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.77297.log.gz 2026-03-20T13:20:06.968 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77491.log 2026-03-20T13:20:06.969 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77394.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.77394.log.gz 2026-03-20T13:20:06.982 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77588.log 2026-03-20T13:20:06.982 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77491.log: 85.3% -- replaced with /var/log/ceph/ceph-client.admin.77491.log.gz 2026-03-20T13:20:06.988 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77685.log 2026-03-20T13:20:06.990 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77588.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.77588.log.gz 2026-03-20T13:20:06.997 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77782.log 2026-03-20T13:20:06.998 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77685.log: 85.3% -- 
replaced with /var/log/ceph/ceph-client.admin.77685.log.gz 2026-03-20T13:20:07.003 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77879.log 2026-03-20T13:20:07.004 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77782.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.77782.log.gz 2026-03-20T13:20:07.014 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.77976.log 2026-03-20T13:20:07.015 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77879.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.77879.log.gz 2026-03-20T13:20:07.019 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78073.log 2026-03-20T13:20:07.022 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.77976.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.77976.log.gz 2026-03-20T13:20:07.030 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78170.log 2026-03-20T13:20:07.031 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78073.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.78073.log.gz 2026-03-20T13:20:07.035 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78267.log 2026-03-20T13:20:07.039 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78170.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.78170.log.gz 2026-03-20T13:20:07.046 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78364.log 2026-03-20T13:20:07.047 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78267.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.78267.log.gz 2026-03-20T13:20:07.051 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.78461.log 2026-03-20T13:20:07.055 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78364.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.78364.log.gz 2026-03-20T13:20:07.062 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78558.log 2026-03-20T13:20:07.063 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78461.log: 85.3% -- replaced with /var/log/ceph/ceph-client.admin.78461.log.gz 2026-03-20T13:20:07.068 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78655.log 2026-03-20T13:20:07.069 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78558.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.78558.log.gz 2026-03-20T13:20:07.079 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78752.log 2026-03-20T13:20:07.080 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78655.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.78655.log.gz 2026-03-20T13:20:07.090 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78849.log 2026-03-20T13:20:07.091 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78752.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.78752.log.gz 2026-03-20T13:20:07.095 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.78946.log 2026-03-20T13:20:07.097 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78849.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.78849.log.gz 2026-03-20T13:20:07.107 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79043.log 2026-03-20T13:20:07.108 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.78946.log: 85.1% -- 
replaced with /var/log/ceph/ceph-client.admin.78946.log.gz 2026-03-20T13:20:07.112 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79140.log 2026-03-20T13:20:07.115 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79043.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.79043.log.gz 2026-03-20T13:20:07.125 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.79237.log 2026-03-20T13:20:07.126 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79140.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.79140.log.gz 2026-03-20T13:20:07.132 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.313090.log 2026-03-20T13:20:07.134 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.79237.log: 85.1% -- replaced with /var/log/ceph/ceph-client.admin.79237.log.gz 2026-03-20T13:20:07.142 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.313124.log 2026-03-20T13:20:07.143 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313090.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.313090.log.gz 2026-03-20T13:20:07.147 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.313158.log 2026-03-20T13:20:07.148 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313124.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.313124.log.gz 2026-03-20T13:20:07.158 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.313261.log 2026-03-20T13:20:07.159 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313158.log: 83.1% -- replaced with /var/log/ceph/ceph-client.admin.313158.log.gz 2026-03-20T13:20:07.163 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.313361.log 2026-03-20T13:20:07.164 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313261.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.313261.log.gz 2026-03-20T13:20:07.174 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.313458.log 2026-03-20T13:20:07.179 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313361.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.313492.log 2026-03-20T13:20:07.180 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313458.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.313458.log.gz 2026-03-20T13:20:07.192 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.313528.log 2026-03-20T13:20:07.192 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313492.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.313492.log.gz 2026-03-20T13:20:07.206 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.313562.log 2026-03-20T13:20:07.207 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313528.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.313528.log.gz 2026-03-20T13:20:07.220 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.540682.log 2026-03-20T13:20:07.220 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.313562.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.313562.log.gz 2026-03-20T13:20:07.235 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.540856.log 2026-03-20T13:20:07.235 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.540682.log: 10.2% -- replaced with /var/log/ceph/ceph-client.0.540682.log.gz 2026-03-20T13:20:07.253 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 
--verbose -- /var/log/ceph/ceph-client.admin.540915.log 2026-03-20T13:20:07.253 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.540856.log: 9.2% -- replaced with /var/log/ceph/ceph-client.0.540856.log.gz 2026-03-20T13:20:07.267 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.540915.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.540915.log.gz 2026-03-20T13:20:07.530 INFO:teuthology.orchestra.run.vm00.stderr: 86.8% -- replaced with /var/log/ceph/ceph-client.admin.313361.log.gz 2026-03-20T13:20:07.628 INFO:teuthology.orchestra.run.vm00.stderr: 92.2% -- replaced with /var/log/ceph/ops-log-ceph-client.0.log.gz 2026-03-20T13:20:37.584 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-20T13:20:37.585 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/ceph-osd.2.log.gz: No space left on device 2026-03-20T13:20:37.585 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-20T13:20:37.585 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/ceph-osd.1.log.gz: No space left on device 2026-03-20T13:20:37.590 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-20T13:20:37.590 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/ceph-osd.3.log.gz: No space left on device 2026-03-20T13:20:37.590 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-20T13:20:37.590 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/rgw.ceph.client.0.log.gz: No space left on device 2026-03-20T13:20:37.590 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-20T13:20:37.590 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/ceph-osd.0.log.gz: No space left on device 2026-03-20T13:20:37.620 DEBUG:teuthology.orchestra.run:got remote process result: 123 2026-03-20T13:20:37.620 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-20T13:20:37.620 INFO:teuthology.orchestra.run.vm00.stderr:real 0m32.616s 2026-03-20T13:20:37.620 INFO:teuthology.orchestra.run.vm00.stderr:user 2m0.133s 2026-03-20T13:20:37.620 
INFO:teuthology.orchestra.run.vm00.stderr:sys 0m7.077s 2026-03-20T13:20:37.620 ERROR:teuthology.run_tasks:Manager failed: ceph Traceback (most recent call last): File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2001, in task yield File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task with contextutil.nested(*subtasks): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools yield File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin proc = remote.run( File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, 
**kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 1181, in cluster yield File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested yield vars File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 2011, in task ctx.managers[config['cluster']].wait_for_clean() File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean num_active_clean = self.get_num_active_clean() File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean pgs = self.get_pg_stats() File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats out = self.raw_cluster_cmd('pg', 'dump', '--format=json') File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd return self.run_cluster_cmd(**kwargs).stdout.getvalue() File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd return 
self.controller.run(**kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 1996, in task with contextutil.nested(*subtasks): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/ceph.py", line 263, in ceph_log run.wait( File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait proc.wait() File 
"/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --" 2026-03-20T13:20:37.620 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-20T13:20:37.624 ERROR:teuthology.contextutil:Saw exception from nested tasks Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested yield vars File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 644, in task yield File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task with contextutil.nested(*subtasks): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools yield File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, 
in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True) File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin proc = remote.run( File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all' 2026-03-20T13:20:37.624 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 
2026-03-20T13:20:37.624 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-20T13:20:37.663 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-20T13:20:37.664 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-20T13:20:37.697 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-20T13:20:37.697 DEBUG:teuthology.orchestra.run.vm00:> 2026-03-20T13:20:37.697 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-20T13:20:37.697 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y remove $d || true 2026-03-20T13:20:37.697 DEBUG:teuthology.orchestra.run.vm00:> done 2026-03-20T13:20:37.703 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, 
python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-20T13:20:37.703 DEBUG:teuthology.orchestra.run.vm06:> 2026-03-20T13:20:37.703 DEBUG:teuthology.orchestra.run.vm06:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-20T13:20:37.703 DEBUG:teuthology.orchestra.run.vm06:> sudo yum -y remove $d || true 2026-03-20T13:20:37.703 DEBUG:teuthology.orchestra.run.vm06:> done 2026-03-20T13:20:37.707 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-20T13:20:37.707 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-20T13:20:37.707 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-20T13:20:37.707 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y remove $d || true 2026-03-20T13:20:37.707 DEBUG:teuthology.orchestra.run.vm09:> done 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 103 M 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T13:20:37.908 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 103 M 2026-03-20T13:20:37.909 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-20T13:20:37.910 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-20T13:20:37.910 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-20T13:20:37.929 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-20T13:20:38.382 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-20T13:20:38.385 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid' 2026-03-20T13:20:38.608 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid' 2026-03-20T13:20:38.650 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-20T13:20:38.674 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2 2026-03-20T13:20:38.674 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T13:20:38.674 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-20T13:20:38.674 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-20T13:20:38.674 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 
2026-03-20T13:20:38.674 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T13:20:38.683 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2 2026-03-20T13:20:38.736 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid' 2026-03-20T13:20:38.856 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid' 2026-03-20T13:20:38.883 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2 2026-03-20T13:20:38.908 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-20T13:20:38.979 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid' 2026-03-20T13:20:38.989 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-20T13:20:38.989 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2 2026-03-20T13:20:39.033 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-20T13:20:39.033 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T13:20:39.033 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-20T13:20:39.034 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-20T13:20:39.034 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T13:20:39.034 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-20T13:20:39.117 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid' 2026-03-20T13:20:39.193 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 103 M 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies: 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:Remove 2 Packages 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 103 M 2026-03-20T13:20:39.194 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-20T13:20:39.200 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-03-20T13:20:39.200 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-20T13:20:39.218 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-03-20T13:20:39.218 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-03-20T13:20:39.246 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid' 2026-03-20T13:20:39.253 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-03-20T13:20:39.279 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2 2026-03-20T13:20:39.279 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T13:20:39.279 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-20T13:20:39.279 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-20T13:20:39.279 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-20T13:20:39.279 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T13:20:39.284 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2 2026-03-20T13:20:39.294 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2 2026-03-20T13:20:39.309 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 362 M 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:Remove 4 Packages 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 364 M 2026-03-20T13:20:39.318 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-20T13:20:39.321 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-20T13:20:39.321 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-20T13:20:39.345 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-20T13:20:39.345 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-20T13:20:39.371 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:39.377 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-20T13:20:39.378 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T13:20:39.424 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-20T13:20:39.424 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:39.424 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-20T13:20:39.424 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-20T13:20:39.424 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:39.424 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-20T13:20:39.427 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-20T13:20:39.434 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 1/4
2026-03-20T13:20:39.436 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-20T13:20:39.440 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-20T13:20:39.455 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-20T13:20:39.505 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:39.522 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-20T13:20:39.522 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 1/4
2026-03-20T13:20:39.522 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-20T13:20:39.522 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-20T13:20:39.584 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-20T13:20:39.584 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:39.584 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-20T13:20:39.584 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-20T13:20:39.584 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-20T13:20:39.584 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:39.584 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-20T13:20:39.630 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 362 M
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:Remove 4 Packages
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 364 M
2026-03-20T13:20:39.631 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-20T13:20:39.634 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-20T13:20:39.634 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-20T13:20:39.647 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:39.660 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-20T13:20:39.661 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T13:20:39.721 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T13:20:39.727 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 1/4
2026-03-20T13:20:39.729 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-20T13:20:39.733 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-20T13:20:39.748 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-20T13:20:39.782 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:39.789 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-20T13:20:39.789 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 0
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 6.8 M
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 19 M
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:Remove 8 Packages
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 28 M
2026-03-20T13:20:39.790 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-20T13:20:39.793 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-20T13:20:39.793 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-20T13:20:39.817 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-20T13:20:39.817 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 1/4
2026-03-20T13:20:39.817 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-20T13:20:39.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-20T13:20:39.818 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-20T13:20:39.818 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-20T13:20:39.859 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-20T13:20:39.864 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/8
2026-03-20T13:20:39.864 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-20T13:20:39.864 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:39.864 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-20T13:20:39.864 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-20T13:20:39.864 INFO:teuthology.orchestra.run.vm00.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-20T13:20:39.864 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:39.864 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-20T13:20:39.867 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-20T13:20:39.870 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-20T13:20:39.872 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-20T13:20:39.875 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-20T13:20:39.878 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-20T13:20:39.899 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T13:20:39.899 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:39.899 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T13:20:39.899 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-20T13:20:39.899 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-20T13:20:39.899 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:39.900 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T13:20:39.907 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T13:20:39.918 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:39.925 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T13:20:39.925 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:39.925 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T13:20:39.925 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-20T13:20:39.925 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-20T13:20:39.925 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:39.927 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T13:20:40.017 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T13:20:40.017 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/8
2026-03-20T13:20:40.017 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 2/8
2026-03-20T13:20:40.017 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 3/8
2026-03-20T13:20:40.017 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-20T13:20:40.017 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-20T13:20:40.017 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-20T13:20:40.017 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-20T13:20:40.042 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-20T13:20:40.065 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64
2026-03-20T13:20:40.066 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64
2026-03-20T13:20:40.066 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:40.066 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-20T13:20:40.085 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 0
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 6.8 M
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 19 M
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:Remove 8 Packages
2026-03-20T13:20:40.086 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.087 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 28 M
2026-03-20T13:20:40.087 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-20T13:20:40.090 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-20T13:20:40.090 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-20T13:20:40.121 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-20T13:20:40.121 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T13:20:40.165 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T13:20:40.171 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/8
2026-03-20T13:20:40.175 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-20T13:20:40.175 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:40.177 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-20T13:20:40.181 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-20T13:20:40.184 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-20T13:20:40.187 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-20T13:20:40.205 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T13:20:40.205 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:40.205 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T13:20:40.205 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-20T13:20:40.205 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-20T13:20:40.205 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.206 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T13:20:40.213 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T13:20:40.233 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T13:20:40.233 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:40.233 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T13:20:40.233 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-20T13:20:40.233 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-20T13:20:40.233 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.235 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T13:20:40.265 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout:===========================================================================================
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout:===========================================================================================
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 24 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 447 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 2.9 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 940 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 140 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 66 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 567 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 54 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 1.4 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 11 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 98 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 996 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 1.6 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 59 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 138 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 409 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 792 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-20T13:20:40.271 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 855 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 @baseos 635 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-20T13:20:40.272 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:40.273 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-20T13:20:40.273 INFO:teuthology.orchestra.run.vm09.stdout:===========================================================================================
2026-03-20T13:20:40.273 INFO:teuthology.orchestra.run.vm09.stdout:Remove 98 Packages
2026-03-20T13:20:40.273 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:40.273 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 666 M
2026-03-20T13:20:40.273 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-20T13:20:40.299 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-20T13:20:40.299 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-20T13:20:40.308 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:40.324 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T13:20:40.324 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/8
2026-03-20T13:20:40.324 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 2/8
2026-03-20T13:20:40.324 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 3/8
2026-03-20T13:20:40.324 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-20T13:20:40.324 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-20T13:20:40.324 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-20T13:20:40.324 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.372 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-20T13:20:40.406 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-20T13:20:40.406 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-20T13:20:40.442 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:40.550 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-20T13:20:40.551 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 1/98
2026-03-20T13:20:40.559 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 1/98
2026-03-20T13:20:40.571 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:40.579 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T13:20:40.579 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:40.579 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T13:20:40.579 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-20T13:20:40.579 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-20T13:20:40.579 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:40.579 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T13:20:40.588 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout:===========================================================================================
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout:===========================================================================================
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 24 M
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages:
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 447 k
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 2.9 M
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 940 k
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 140 M
2026-03-20T13:20:40.594 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 66 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 567 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 54 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 1.4 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 11 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 98 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 996 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 1.6 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 59 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 138 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 409 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 792 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 855 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-20T13:20:40.595 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 @baseos 635 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout:===========================================================================================
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout:Remove 98 Packages
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 666 M
2026-03-20T13:20:40.596 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-20T13:20:40.624 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-20T13:20:40.624 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-20T13:20:40.655 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.n 3/98
2026-03-20T13:20:40.656 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noar 4/98
2026-03-20T13:20:40.692 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:40.716 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noar 4/98
2026-03-20T13:20:40.726 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/98
2026-03-20T13:20:40.731 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/98
2026-03-20T13:20:40.731 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 7/98
2026-03-20T13:20:40.740 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-20T13:20:40.740 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T13:20:40.742 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 7/98
2026-03-20T13:20:40.748 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/98
2026-03-20T13:20:40.752 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/98
2026-03-20T13:20:40.760 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/98
2026-03-20T13:20:40.764 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/98
2026-03-20T13:20:40.784 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T13:20:40.784 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:40.784 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T13:20:40.784 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-20T13:20:40.784 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-20T13:20:40.784 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:40.789 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T13:20:40.797 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T13:20:40.812 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:40.813 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T13:20:40.813 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:40.813 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T13:20:40.813 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:40.821 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T13:20:40.832 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T13:20:40.834 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/98
2026-03-20T13:20:40.839 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/98
2026-03-20T13:20:40.843 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/98
2026-03-20T13:20:40.851 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/98
2026-03-20T13:20:40.856 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/98
2026-03-20T13:20:40.868 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 19/98
2026-03-20T13:20:40.875 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 20/98
2026-03-20T13:20:40.907 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T13:20:40.907 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 1/98
2026-03-20T13:20:40.911 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 21/98
2026-03-20T13:20:40.915 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 1/98
2026-03-20T13:20:40.918 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 22/98
2026-03-20T13:20:40.921 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 23/98
2026-03-20T13:20:40.930 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 24/98
2026-03-20T13:20:40.934 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T13:20:40.934 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:40.934 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T13:20:40.934 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-20T13:20:40.934 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-20T13:20:40.934 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:40.934 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T13:20:40.938 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:40.942 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 25/98
2026-03-20T13:20:40.942 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f841 26/98
2026-03-20T13:20:40.949 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T13:20:40.950 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f841 26/98
2026-03-20T13:20:41.009 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.n 3/98
2026-03-20T13:20:41.009 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noar 4/98
2026-03-20T13:20:41.047 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 27/98
2026-03-20T13:20:41.059 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:41.064 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 28/98
2026-03-20T13:20:41.075 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noar 4/98
2026-03-20T13:20:41.078 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T13:20:41.078 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-20T13:20:41.078 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:41.080 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T13:20:41.106 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/98
2026-03-20T13:20:41.153 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/98
2026-03-20T13:20:41.153 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 7/98
2026-03-20T13:20:41.175 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:41.184 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T13:20:41.186 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 7/98
2026-03-20T13:20:41.221 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 30/98
2026-03-20T13:20:41.223 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/98
2026-03-20T13:20:41.256 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 31/98
2026-03-20T13:20:41.257 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/98
2026-03-20T13:20:41.267 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 32/98
2026-03-20T13:20:41.270 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 33/98
2026-03-20T13:20:41.273 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/98
2026-03-20T13:20:41.285 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/98
2026-03-20T13:20:41.292 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T13:20:41.292 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:41.292 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T13:20:41.292 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-20T13:20:41.292 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-20T13:20:41.292 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:41.294 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T13:20:41.294 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:41.303 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T13:20:41.306 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 35/98
2026-03-20T13:20:41.307 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T13:20:41.307 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:41.307 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T13:20:41.307 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-20T13:20:41.307 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-20T13:20:41.307 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:41.309 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 36/98
2026-03-20T13:20:41.311 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T13:20:41.312 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 37/98
2026-03-20T13:20:41.314 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 38/98
2026-03-20T13:20:41.318 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 39/98
2026-03-20T13:20:41.321 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T13:20:41.322 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 40/98
2026-03-20T13:20:41.327 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 41/98
2026-03-20T13:20:41.337 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T13:20:41.337 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:41.337 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T13:20:41.337 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:41.346 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T13:20:41.356 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T13:20:41.358 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/98
2026-03-20T13:20:41.364 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/98
2026-03-20T13:20:41.369 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/98
2026-03-20T13:20:41.372 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 42/98
2026-03-20T13:20:41.378 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/98
2026-03-20T13:20:41.383 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/98
2026-03-20T13:20:41.385 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 43/98
2026-03-20T13:20:41.387 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 44/98
2026-03-20T13:20:41.393 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 45/98
2026-03-20T13:20:41.394 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 19/98
2026-03-20T13:20:41.395 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 46/98
2026-03-20T13:20:41.399 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 47/98
2026-03-20T13:20:41.401 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 20/98
2026-03-20T13:20:41.401 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 48/98
2026-03-20T13:20:41.415 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:41.426 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T13:20:41.426 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:41.426 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T13:20:41.426 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:41.427 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T13:20:41.433 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 21/98
2026-03-20T13:20:41.437 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T13:20:41.439 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 50/98
2026-03-20T13:20:41.441 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 22/98
2026-03-20T13:20:41.441 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 51/98
2026-03-20T13:20:41.444 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 23/98
2026-03-20T13:20:41.444 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ply-3.11-14.el9.noarch 52/98
2026-03-20T13:20:41.446 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 53/98
2026-03-20T13:20:41.448 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 54/98
2026-03-20T13:20:41.450 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : 
python3-more-itertools-8.12.0-2.el9.noarch 55/98 2026-03-20T13:20:41.453 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 24/98 2026-03-20T13:20:41.453 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 56/98 2026-03-20T13:20:41.456 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/98 2026-03-20T13:20:41.459 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.noarch 58/98 2026-03-20T13:20:41.460 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 25/98 2026-03-20T13:20:41.460 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f841 26/98 2026-03-20T13:20:41.466 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/98 2026-03-20T13:20:41.467 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f841 26/98 2026-03-20T13:20:41.470 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/98 2026-03-20T13:20:41.472 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/98 2026-03-20T13:20:41.475 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/98 2026-03-20T13:20:41.477 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/98 2026-03-20T13:20:41.483 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/98 2026-03-20T13:20:41.486 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/98 2026-03-20T13:20:41.492 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 66/98 2026-03-20T13:20:41.495 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 
67/98 2026-03-20T13:20:41.497 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 68/98 2026-03-20T13:20:41.504 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 69/98 2026-03-20T13:20:41.508 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 70/98 2026-03-20T13:20:41.511 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 71/98 2026-03-20T13:20:41.519 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 72/98 2026-03-20T13:20:41.539 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid' 2026-03-20T13:20:41.551 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 73/98 2026-03-20T13:20:41.556 DEBUG:teuthology.orchestra.run.vm06:> sudo yum clean all 2026-03-20T13:20:41.570 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 27/98 2026-03-20T13:20:41.581 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 74/98 2026-03-20T13:20:41.614 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 28/98 2026-03-20T13:20:41.618 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 75/98 2026-03-20T13:20:41.629 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 29/98 2026-03-20T13:20:41.629 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 
2026-03-20T13:20:41.629 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:41.630 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T13:20:41.649 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9 76/98
2026-03-20T13:20:41.660 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9. 77/98
2026-03-20T13:20:41.666 INFO:teuthology.orchestra.run.vm06.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T13:20:41.680 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T13:20:41.680 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-20T13:20:41.680 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-20T13:20:41.680 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:41.686 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:41.687 ERROR:teuthology.run_tasks:Manager failed: install
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 220, in install
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 644, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 640, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 222, in install
    remove_packages(ctx, config, package_list)
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 103, in remove_packages
    with parallel() as p:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthos/teuthology/teuthology/task/install/rpm.py", line 43, in _remove
    remote.run(args='sudo yum clean all')
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm06 with status 1: 'sudo yum clean all'
2026-03-20T13:20:41.687 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-20T13:20:41.694 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-20T13:20:41.694 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T13:20:41.694 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T13:20:41.694 INFO:teuthology.orchestra.run.vm09.stdout:warning: file /etc/logrotate.d/ceph: remove failed: No such file or directory
2026-03-20T13:20:41.694 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-20T13:20:41.696 DEBUG:teuthology.orchestra.run.vm06:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T13:20:41.697 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T13:20:41.708 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T13:20:41.710 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-20T13:20:41.710 INFO:teuthology.orchestra.run.vm06.stderr:bash: line 1: ntpq: command not found
2026-03-20T13:20:41.715 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-20T13:20:41.718 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T13:20:41.718 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 79/98
2026-03-20T13:20:41.726 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 30/98
2026-03-20T13:20:41.736 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 31/98
2026-03-20T13:20:41.738 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 32/98
2026-03-20T13:20:41.740 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 79/98
2026-03-20T13:20:41.741 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 33/98
2026-03-20T13:20:41.747 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 80/98
2026-03-20T13:20:41.750 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86 81/98
2026-03-20T13:20:41.752 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 82/98
2026-03-20T13:20:41.752 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 83/98
2026-03-20T13:20:41.762 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T13:20:41.762 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:41.762 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T13:20:41.762 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-20T13:20:41.762 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-20T13:20:41.762 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:41.763 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T13:20:41.772 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T13:20:41.776 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 35/98
2026-03-20T13:20:41.778 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 36/98
2026-03-20T13:20:41.781 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 37/98
2026-03-20T13:20:41.783 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 38/98
2026-03-20T13:20:41.787 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 39/98
2026-03-20T13:20:41.791 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 40/98
2026-03-20T13:20:41.796 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 41/98
2026-03-20T13:20:41.849 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 42/98
2026-03-20T13:20:41.860 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 43/98
2026-03-20T13:20:41.862 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 44/98
2026-03-20T13:20:41.865 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 45/98
2026-03-20T13:20:41.867 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 46/98
2026-03-20T13:20:41.870 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 47/98
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 48/98
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm00.stdout:^+ stratum2-4.NTP.TechFak.U> 2 8 377 52 +690us[ +690us] +/- 18ms
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm00.stdout:^* static.buzo.eu 2 8 377 251 -654us[ -678us] +/- 16ms
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm00.stdout:^+ alpha.rueckgr.at 2 7 377 52 -450us[ -450us] +/- 47ms
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm00.stdout:^+ butterfly.post-peine.de 2 8 377 115 +251us[ +251us] +/- 35ms
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm06.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm06.stdout:===============================================================================
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm06.stdout:^- stratum2-4.NTP.TechFak.U> 2 7 377 54 +1119us[+1119us] +/- 18ms
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm06.stdout:^* static.buzo.eu 2 8 377 115 -288us[ -296us] +/- 17ms
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm06.stdout:^+ alpha.rueckgr.at 2 7 377 119 -45us[ -54us] +/- 46ms
2026-03-20T13:20:41.873 INFO:teuthology.orchestra.run.vm06.stdout:^+ butterfly.post-peine.de 2 7 377 120 +595us[ +587us] +/- 35ms
2026-03-20T13:20:41.874 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T13:20:41.874 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-20T13:20:41.874 INFO:teuthology.orchestra.run.vm09.stdout:^+ alpha.rueckgr.at 2 8 377 241 -23us[ -30us] +/- 44ms
2026-03-20T13:20:41.874 INFO:teuthology.orchestra.run.vm09.stdout:^+ butterfly.post-peine.de 2 8 377 119 +601us[ +601us] +/- 35ms
2026-03-20T13:20:41.874 INFO:teuthology.orchestra.run.vm09.stdout:^- stratum2-4.NTP.TechFak.U> 2 7 377 49 +1012us[+1012us] +/- 18ms
2026-03-20T13:20:41.874 INFO:teuthology.orchestra.run.vm09.stdout:^* static.buzo.eu 2 8 377 120 -253us[ -262us] +/- 17ms
2026-03-20T13:20:41.874 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-20T13:20:41.878 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-20T13:20:41.878 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-20T13:20:41.881 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-20T13:20:41.884 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-20T13:20:41.886 INFO:teuthology.task.internal:Duration was 2631.582152 seconds
2026-03-20T13:20:41.886 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-20T13:20:41.889 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-20T13:20:41.889 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-20T13:20:41.895 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T13:20:41.895 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T13:20:41.895 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T13:20:41.895 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:41.896 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-20T13:20:41.896 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T13:20:41.906 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T13:20:41.907 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 50/98
2026-03-20T13:20:41.909 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 51/98
2026-03-20T13:20:41.913 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ply-3.11-14.el9.noarch 52/98
2026-03-20T13:20:41.916 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-20T13:20:41.917 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 53/98
2026-03-20T13:20:41.919 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 54/98
2026-03-20T13:20:41.922 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 55/98
2026-03-20T13:20:41.925 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 56/98
2026-03-20T13:20:41.928 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/98
2026-03-20T13:20:41.932 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.noarch 58/98
2026-03-20T13:20:41.938 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T13:20:41.941 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/98
2026-03-20T13:20:41.948 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/98
2026-03-20T13:20:41.950 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/98
2026-03-20T13:20:41.953 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/98
2026-03-20T13:20:41.956 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/98
2026-03-20T13:20:41.957 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T13:20:41.957 INFO:teuthology.orchestra.run.vm06.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T13:20:41.962 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/98
2026-03-20T13:20:41.966 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/98
2026-03-20T13:20:41.972 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 66/98
2026-03-20T13:20:41.975 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 67/98
2026-03-20T13:20:41.977 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 68/98
2026-03-20T13:20:41.983 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 69/98
2026-03-20T13:20:41.986 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 70/98
2026-03-20T13:20:41.990 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 71/98
2026-03-20T13:20:41.997 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 72/98
2026-03-20T13:20:42.002 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 73/98
2026-03-20T13:20:42.005 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 74/98
2026-03-20T13:20:42.008 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 75/98
2026-03-20T13:20:42.009 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9 76/98
2026-03-20T13:20:42.011 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9. 77/98
2026-03-20T13:20:42.043 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T13:20:42.043 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-20T13:20:42.043 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-20T13:20:42.043 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:42.050 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T13:20:42.050 INFO:teuthology.orchestra.run.vm00.stdout:warning: file /etc/logrotate.d/ceph: remove failed: No such file or directory
2026-03-20T13:20:42.050 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T13:20:42.075 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T13:20:42.075 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 79/98
2026-03-20T13:20:42.400 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-20T13:20:42.401 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local
2026-03-20T13:20:42.401 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-20T13:20:42.423 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm06.local
2026-03-20T13:20:42.423 DEBUG:teuthology.orchestra.run.vm06:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-20T13:20:42.466 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-20T13:20:42.466 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-20T13:20:42.488 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-20T13:20:42.488 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-20T13:20:42.490 DEBUG:teuthology.orchestra.run.vm06:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-20T13:20:42.507 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-20T13:20:42.521 INFO:teuthology.orchestra.run.vm06.stderr:bash: line 1: /home/ubuntu/cephtest/archive/syslog/journalctl.log: No space left on device
2026-03-20T13:20:42.894 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:42.894 ERROR:teuthology.run_tasks:Manager failed: internal.syslog
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/syslog.py", line 76, in syslog
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/syslog.py", line 163, in syslog
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm06 with status 1: 'sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log'
2026-03-20T13:20:42.895 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-20T13:20:42.928 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-20T13:20:42.928 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-20T13:20:42.957 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-20T13:20:42.982 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-20T13:20:43.008 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-20T13:20:43.012 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-20T13:20:43.014 DEBUG:teuthology.orchestra.run.vm06:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-20T13:20:43.025 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-20T13:20:43.040 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-03-20T13:20:43.048 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = core
2026-03-20T13:20:43.076 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-20T13:20:43.093 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-20T13:20:43.113 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:43.114 DEBUG:teuthology.orchestra.run.vm06:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-20T13:20:43.128 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:43.128 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-20T13:20:43.148 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:43.148 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-20T13:20:43.151 INFO:teuthology.task.internal:Transferring archived files...
2026-03-20T13:20:43.151 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/remote/vm00
2026-03-20T13:20:43.151 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-20T13:20:43.324 DEBUG:teuthology.misc:Transferring archived files from vm06:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/remote/vm06
2026-03-20T13:20:43.324 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-20T13:20:43.350 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-20_12:32:34-rgw-tentacle-none-default-vps/2137/remote/vm09
2026-03-20T13:20:43.350 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-20T13:20:43.491 INFO:teuthology.orchestra.run.vm00.stdout:  Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64    79/98
2026-03-20T13:20:43.499 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : qatzip-libs-1.3.1-1.el9.x86_64                  80/98
2026-03-20T13:20:43.500 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86 81/98
2026-03-20T13:20:43.500 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : python3-prettytable-0.7.2-27.el9.noarch         82/98
2026-03-20T13:20:43.500 INFO:teuthology.orchestra.run.vm00.stdout:  Erasing          : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64  83/98
2026-03-20T13:20:43.529 INFO:teuthology.task.internal:Removing archive directory...
2026-03-20T13:20:43.529 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-20T13:20:43.531 DEBUG:teuthology.orchestra.run.vm06:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-20T13:20:43.533 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-20T13:20:43.588 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-20T13:20:43.591 INFO:teuthology.task.internal:Not uploading archives.
2026-03-20T13:20:43.591 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-20T13:20:43.594 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-20T13:20:43.594 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-20T13:20:43.596 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-20T13:20:43.598 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-20T13:20:43.611 INFO:teuthology.orchestra.run.vm00.stdout:  8532144 0 drwxr-xr-x 3 ubuntu ubuntu 23 Mar 20 13:20 /home/ubuntu/cephtest
2026-03-20T13:20:43.611 INFO:teuthology.orchestra.run.vm00.stdout: 12585551 0 drwxr-xr-x 2 ubuntu ubuntu  6 Mar 20 12:40 /home/ubuntu/cephtest/ceph.data
2026-03-20T13:20:43.612 INFO:teuthology.orchestra.run.vm00.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-20T13:20:43.616 INFO:teuthology.orchestra.run.vm06.stdout:  8532146 0 drwxr-xr-x 3 ubuntu ubuntu 76 Mar 20 13:20 /home/ubuntu/cephtest
2026-03-20T13:20:43.616 INFO:teuthology.orchestra.run.vm06.stdout: 12585487 0 drwxr-xr-x 2 ubuntu ubuntu  6 Mar 20 12:40 /home/ubuntu/cephtest/ceph.data
2026-03-20T13:20:43.616 INFO:teuthology.orchestra.run.vm06.stdout:  8532150 4 -rw-r--r-- 1 ceph   root   20 Mar 20 12:41 /home/ubuntu/cephtest/url_file
2026-03-20T13:20:43.616 INFO:teuthology.orchestra.run.vm06.stdout:  8532151 0 srwxr-xr-x 1 root   root    0 Mar 20 12:41 /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock
2026-03-20T13:20:43.616 INFO:teuthology.orchestra.run.vm06.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-20T13:20:43.630 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T13:20:43.630 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_200ab49823532903ca9be3870ca957b2093ed400/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-20T13:20:43.630 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-20T13:20:43.633 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'
2026-03-20T13:20:43.634 INFO:teuthology.run:Summary data:
description: rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides supported-distros/{centos_latest} tasks/{0-install test_dedup}}
duration: 2631.582152366638
failure_reason: 'Command failed on vm00 with status 1: ''adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'''
flavor: default
owner: kyr
sentry_event: null
status: fail
success: false

2026-03-20T13:20:43.634 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-20T13:20:43.647 INFO:teuthology.orchestra.run.vm09.stdout:  8532145 0 drwxr-xr-x 3 ubuntu ubuntu  95 Mar 20 13:20 /home/ubuntu/cephtest
2026-03-20T13:20:43.647 INFO:teuthology.orchestra.run.vm09.stdout: 12989196 0 drwxr-xr-x 2 ubuntu ubuntu   6 Mar 20 12:40 /home/ubuntu/cephtest/ceph.data
2026-03-20T13:20:43.647 INFO:teuthology.orchestra.run.vm09.stdout:  8532149 4 -rw-r--r-- 1 ubuntu ubuntu 409 Mar 20 12:40 /home/ubuntu/cephtest/ceph.monmap
2026-03-20T13:20:43.647 INFO:teuthology.orchestra.run.vm09.stdout:  8532150 4 -rw-r--r-- 1 ceph   root    20 Mar 20 12:41 /home/ubuntu/cephtest/url_file
2026-03-20T13:20:43.647 INFO:teuthology.orchestra.run.vm09.stdout:  8531706 0 srwxr-xr-x 1 root   root     0 Mar 20 12:41 /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock
2026-03-20T13:20:43.648 INFO:teuthology.orchestra.run.vm09.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-20T13:20:43.656 INFO:teuthology.run:FAIL