2026-03-20T18:22:15.527 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-20T18:22:15.532 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-20T18:22:15.552 INFO:teuthology.run:Config: archive_path: /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719
branch: tentacle
description: rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides supported-distros/{centos_latest} tasks/{0-install test_dedup}}
email: null
first_in_suite: false
flavor: default
job_id: '2719'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: tentacle
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      logical_volumes:
        lv_1:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_2:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_3:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_4:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
      timezone: UTC
      volume_groups:
        vg_nvme:
          pvs: /dev/vdb,/dev/vdc,/dev/vdd,/dev/vde
  ceph:
    conf:
      client:
        debug rgw: 20
        debug rgw dedup: 20
        setgroup: ceph
        setuser: ceph
      global:
        osd_max_pg_log_entries: 10
        osd_min_pg_log_entries: 10
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore allocator: bitmap
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug ms: 1
        debug osd: 20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd mclock iops capacity threshold hdd: 49000
        osd objectstore: bluestore
        osd shutdown pgref assert: true
    flavor: default
    fs: xfs
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - \(POOL_APP_NOT_ENABLED\)
    - not have an application enabled
    sha1: 70f8415b300f041766fa27faf7d5472699e32388
  ceph-deploy:
    bluestore: true
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd objectstore: bluestore
    fs: xfs
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm
  install:
    ceph:
      flavor: default
      sha1: 70f8415b300f041766fa27faf7d5472699e32388
    extra_system_packages:
      deb:
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
  rgw:
    frontend: beast
    storage classes:
      FROZEN: null
      LUKEWARM: null
  thrashosds:
    bdev_inject_crash: 2
    bdev_inject_crash_probability: 0.5
  workunit:
    branch: tt-tentacle
    sha1: 938e12e80b676435f28993327ab6082a0d57e922
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
- - client.2
seed: 9676
sha1: 70f8415b300f041766fa27faf7d5472699e32388
sleep_before_teardown: 0
suite: rgw
suite_branch: tt-tentacle
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 938e12e80b676435f28993327ab6082a0d57e922
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHgRJrHOZyqTVAoIakGGfMNHQqM2D7IKMDlZ3KBkehSsuc30OZ+snHqbcDv3ViWEzoMxVJzcTlzwMF9LAAKreyU=
  vm02.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLHcJwQcYSeuAFCeT1rgGP6uxiInXVH0Tl0QotS7NIUfDkpdn09b9jmpmv1ADNotz13xr2oAJiPMtE4sPnXZeLo=
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEWi21wlYfNkmZrMXDcXr9wyDZJ87iDLDe4kCHMZgVRj2Mx32g/A5kbCBNwUCFHtPO/dvch4xUKrN4mpzVZIKk0=
tasks:
- install: null
- ceph: null
- openssl_keys: null
- rgw:
  - client.0
  - client.1
  - client.2
- tox:
  - client.0
- tox:
  - client.0
- dedup-tests:
    client.0:
      rgw_server: client.0
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-20_18:10:20
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345
2026-03-20T18:22:15.552 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa; will attempt to use it
2026-03-20T18:22:15.553 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks
2026-03-20T18:22:15.553 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-20T18:22:15.553 INFO:teuthology.task.internal:Checking packages...
2026-03-20T18:22:15.553 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash '70f8415b300f041766fa27faf7d5472699e32388'
2026-03-20T18:22:15.553 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-20T18:22:15.553 INFO:teuthology.packaging:ref: None
2026-03-20T18:22:15.553 INFO:teuthology.packaging:tag: None
2026-03-20T18:22:15.553 INFO:teuthology.packaging:branch: tentacle
2026-03-20T18:22:15.553 INFO:teuthology.packaging:sha1: 70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T18:22:15.553 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=tentacle
2026-03-20T18:22:16.331 INFO:teuthology.task.internal:Found packages for ceph version 20.2.0-721.g5bb32787
2026-03-20T18:22:16.332 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-20T18:22:16.333 INFO:teuthology.task.internal:no buildpackages task found
2026-03-20T18:22:16.333 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-20T18:22:16.333 INFO:teuthology.task.internal:Saving configuration
2026-03-20T18:22:16.338 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-20T18:22:16.339 INFO:teuthology.task.internal.check_lock:Checking locks...
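[Annotation] The internal.check_packages step above resolves builds through the public shaman.ceph.com search API, using exactly the query string logged. A minimal sketch of that lookup, assuming the third-party requests library; find_build() and its defaults are illustrative helpers, not teuthology's API:

    import requests

    def find_build(ref="tentacle", flavor="default", distro="centos/9/x86_64"):
        # Same query parameters as the shaman URL logged above.
        resp = requests.get(
            "https://shaman.ceph.com/api/search",
            params={"status": "ready", "project": "ceph",
                    "flavor": flavor, "distros": distro, "ref": ref},
            timeout=30,
        )
        resp.raise_for_status()
        builds = resp.json()  # list of matching build records
        return builds[0] if builds else None

A non-empty result lets the job proceed, which is what "Found packages for ceph version 20.2.0-721.g5bb32787" records above.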
2026-03-20T18:22:16.345 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-20 18:20:24.182777', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHgRJrHOZyqTVAoIakGGfMNHQqM2D7IKMDlZ3KBkehSsuc30OZ+snHqbcDv3ViWEzoMxVJzcTlzwMF9LAAKreyU='}
2026-03-20T18:22:16.350 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm02.local', 'description': '/archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-20 18:20:24.183363', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:02', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLHcJwQcYSeuAFCeT1rgGP6uxiInXVH0Tl0QotS7NIUfDkpdn09b9jmpmv1ADNotz13xr2oAJiPMtE4sPnXZeLo='}
2026-03-20T18:22:16.355 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-20 18:20:24.183680', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEWi21wlYfNkmZrMXDcXr9wyDZJ87iDLDe4kCHMZgVRj2Mx32g/A5kbCBNwUCFHtPO/dvch4xUKrN4mpzVZIKk0='}
2026-03-20T18:22:16.355 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-20T18:22:16.356 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0']
2026-03-20T18:22:16.356 INFO:teuthology.task.internal:roles: ubuntu@vm02.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1']
2026-03-20T18:22:16.356 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['client.2']
2026-03-20T18:22:16.356 INFO:teuthology.run_tasks:Running task console_log...
2026-03-20T18:22:16.362 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-20T18:22:16.366 DEBUG:teuthology.task.console_log:vm02 does not support IPMI; excluding
2026-03-20T18:22:16.371 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-20T18:22:16.371 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fd2bd0f8790>, signals=[15])
2026-03-20T18:22:16.371 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-20T18:22:16.372 INFO:teuthology.task.internal:Opening connections...
2026-03-20T18:22:16.372 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-20T18:22:16.372 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T18:22:16.431 DEBUG:teuthology.task.internal:connecting to ubuntu@vm02.local
2026-03-20T18:22:16.432 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T18:22:16.489 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-20T18:22:16.489 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T18:22:16.546 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-20T18:22:16.548 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-20T18:22:16.564 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-20T18:22:16.564 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:NAME="CentOS Stream"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="9"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:ID="centos"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE="rhel fedora"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="9"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:PLATFORM_ID="platform:el9"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:ANSI_COLOR="0;31"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:LOGO="fedora-logo-icon"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://centos.org/"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-20T18:22:16.620 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-20T18:22:16.620 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-20T18:22:16.625 DEBUG:teuthology.orchestra.run.vm02:> uname -m
2026-03-20T18:22:16.644 INFO:teuthology.orchestra.run.vm02.stdout:x86_64
2026-03-20T18:22:16.644 DEBUG:teuthology.orchestra.run.vm02:> cat /etc/os-release
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:NAME="CentOS Stream"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:VERSION="9"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:ID="centos"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:ID_LIKE="rhel fedora"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_ID="9"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:PLATFORM_ID="platform:el9"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:ANSI_COLOR="0;31"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:LOGO="fedora-logo-icon"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:HOME_URL="https://centos.org/"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-20T18:22:16.701 INFO:teuthology.orchestra.run.vm02.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-20T18:22:16.701 INFO:teuthology.lock.ops:Updating vm02.local on lock server
2026-03-20T18:22:16.705 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-20T18:22:16.724 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-20T18:22:16.724 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:NAME="CentOS Stream"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="9"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:ID="centos"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE="rhel fedora"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="9"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:PLATFORM_ID="platform:el9"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:ANSI_COLOR="0;31"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:LOGO="fedora-logo-icon"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://centos.org/"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-20T18:22:16.782 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-20T18:22:16.782 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-20T18:22:16.787 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-20T18:22:16.789 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-20T18:22:16.790 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-20T18:22:16.790 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-20T18:22:16.792 DEBUG:teuthology.orchestra.run.vm02:> test '!' -e /home/ubuntu/cephtest
2026-03-20T18:22:16.794 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-20T18:22:16.839 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-20T18:22:16.840 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
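[Annotation] internal.push_inventory above collects each node's facts by running uname -m and cat /etc/os-release, then updates the lock server. A minimal sketch of turning the captured os-release text into a dict; parse_os_release() is an illustrative helper, not teuthology's API:

    def parse_os_release(text):
        # /etc/os-release is KEY=value, one per line, values often double-quoted.
        info = {}
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
        return info

    # For the output above: info["ID"] == "centos", info["VERSION_ID"] == "9"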
2026-03-20T18:22:16.840 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-20T18:22:16.847 DEBUG:teuthology.orchestra.run.vm02:> test -z $(ls -A /var/lib/ceph)
2026-03-20T18:22:16.848 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-20T18:22:16.860 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-20T18:22:16.862 INFO:teuthology.orchestra.run.vm02.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-20T18:22:16.896 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-20T18:22:16.897 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-20T18:22:16.905 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-20T18:22:16.919 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T18:22:17.123 DEBUG:teuthology.orchestra.run.vm02:> test -e /ceph-qa-ready
2026-03-20T18:22:17.139 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T18:22:17.334 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-20T18:22:17.350 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T18:22:17.552 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-20T18:22:17.553 INFO:teuthology.task.internal:Creating test directory...
2026-03-20T18:22:17.553 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-20T18:22:17.555 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-20T18:22:17.556 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-20T18:22:17.572 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-20T18:22:17.573 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-20T18:22:17.574 INFO:teuthology.task.internal:Creating archive directory...
2026-03-20T18:22:17.575 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-20T18:22:17.610 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-20T18:22:17.614 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-20T18:22:17.632 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-20T18:22:17.633 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-20T18:22:17.633 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-20T18:22:17.681 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T18:22:17.682 DEBUG:teuthology.orchestra.run.vm02:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-20T18:22:17.696 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T18:22:17.696 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-20T18:22:17.712 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T18:22:17.712 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-20T18:22:17.724 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-20T18:22:17.737 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-20T18:22:17.746 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T18:22:17.757 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T18:22:17.761 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T18:22:17.771 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T18:22:17.776 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T18:22:17.785 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T18:22:17.786 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-20T18:22:17.788 INFO:teuthology.task.internal:Configuring sudo...
2026-03-20T18:22:17.788 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-20T18:22:17.801 DEBUG:teuthology.orchestra.run.vm02:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-20T18:22:17.815 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-20T18:22:17.853 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-20T18:22:17.855 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
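[Annotation] internal.coredump above points kernel.core_pattern into the per-job archive so any crash during the run leaves a core file that is collected with the logs; %t expands to the crash epoch time and %p to the PID. A sketch of the same setup, assuming passwordless sudo; enable_coredumps() is an illustrative helper, not teuthology's API:

    import subprocess

    def enable_coredumps(d="/home/ubuntu/cephtest/archive/coredump"):
        # Create the archive directory, then redirect core files into it.
        subprocess.run(["install", "-d", "-m0755", "--", d], check=True)
        pattern = "kernel.core_pattern=%s/%%t.%%p.core" % d
        subprocess.run(["sudo", "sysctl", "-w", pattern], check=True)
        # Persist across reboots, mirroring the `tee -a /etc/sysctl.conf` above.
        subprocess.run("echo %s | sudo tee -a /etc/sysctl.conf" % pattern,
                       shell=True, check=True)

The two stdout lines per host above are sysctl's confirmation followed by tee echoing the persisted line.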
2026-03-20T18:22:17.855 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-20T18:22:17.865 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-20T18:22:17.880 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-20T18:22:17.910 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-20T18:22:17.944 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-20T18:22:18.001 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T18:22:18.001 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-20T18:22:18.062 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-20T18:22:18.085 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-20T18:22:18.143 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-20T18:22:18.143 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-20T18:22:18.202 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-20T18:22:18.226 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-20T18:22:18.285 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-20T18:22:18.285 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-20T18:22:18.345 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-20T18:22:18.346 DEBUG:teuthology.orchestra.run.vm02:> sudo service rsyslog restart
2026-03-20T18:22:18.349 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-20T18:22:18.373 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T18:22:18.375 INFO:teuthology.orchestra.run.vm02.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T18:22:18.414 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T18:22:18.872 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-20T18:22:18.874 INFO:teuthology.task.internal:Starting timer...
2026-03-20T18:22:18.874 INFO:teuthology.run_tasks:Running task pcp...
2026-03-20T18:22:18.877 INFO:teuthology.run_tasks:Running task selinux...
2026-03-20T18:22:18.879 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-20T18:22:18.879 INFO:teuthology.task.selinux:Excluding vm02: VMs are not yet supported
2026-03-20T18:22:18.879 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-20T18:22:18.879 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-20T18:22:18.879 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-20T18:22:18.879 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-20T18:22:18.879 INFO:teuthology.run_tasks:Running task ansible.cephlab...
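[Annotation] internal.syslog above creates kern.log and misc.log in the archive and pipes a drop-in into /etc/rsyslog.d/80-cephtest.conf via `sudo dd` before restarting rsyslog; the file's contents are not echoed in this log. A plausible minimal drop-in consistent with the two files it creates, shown purely as an assumption rather than the actual config:

    # /etc/rsyslog.d/80-cephtest.conf (illustrative only; real contents not
    # shown in this log)
    kern.* -/home/ubuntu/cephtest/archive/syslog/kern.log
    *.*;kern.none -/home/ubuntu/cephtest/archive/syslog/misc.log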
2026-03-20T18:22:18.880 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}
2026-03-20T18:22:18.881 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/kshtsk/ceph-cm-ansible.git
2026-03-20T18:22:18.882 INFO:teuthology.repo_utils:Fetching github.com_kshtsk_ceph-cm-ansible_main from origin
2026-03-20T18:22:19.390 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-20T18:22:19.396 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-20T18:22:19.396 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "logical_volumes": {"lv_1": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_2": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_3": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_4": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}}, "timezone": "UTC", "volume_groups": {"vg_nvme": {"pvs": "/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde"}}}' -i /tmp/teuth_ansible_inventoryaovuah75 --limit vm00.local,vm02.local,vm05.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-20T18:24:22.812 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm02.local'), Remote(name='ubuntu@vm05.local')]
2026-03-20T18:24:22.813 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-20T18:24:22.813 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T18:24:22.879 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-20T18:24:22.967 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-20T18:24:22.967 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm02.local'
2026-03-20T18:24:22.967 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T18:24:23.032 DEBUG:teuthology.orchestra.run.vm02:> true
2026-03-20T18:24:23.129 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm02.local'
2026-03-20T18:24:23.129 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-20T18:24:23.130 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T18:24:23.193 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-20T18:24:23.271 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-20T18:24:23.271 INFO:teuthology.run_tasks:Running task clock...
2026-03-20T18:24:23.273 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-20T18:24:23.273 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-20T18:24:23.273 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T18:24:23.275 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-20T18:24:23.275 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T18:24:23.276 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-20T18:24:23.277 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T18:24:23.310 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-20T18:24:23.311 INFO:teuthology.orchestra.run.vm02.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-20T18:24:23.327 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-20T18:24:23.329 INFO:teuthology.orchestra.run.vm02.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-20T18:24:23.346 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-20T18:24:23.355 INFO:teuthology.orchestra.run.vm02.stderr:sudo: ntpd: command not found
2026-03-20T18:24:23.359 INFO:teuthology.orchestra.run.vm00.stderr:sudo: ntpd: command not found
2026-03-20T18:24:23.362 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-20T18:24:23.367 INFO:teuthology.orchestra.run.vm02.stdout:506 Cannot talk to daemon
2026-03-20T18:24:23.383 INFO:teuthology.orchestra.run.vm00.stdout:506 Cannot talk to daemon
2026-03-20T18:24:23.388 INFO:teuthology.orchestra.run.vm02.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-20T18:24:23.393 INFO:teuthology.orchestra.run.vm05.stderr:sudo: ntpd: command not found
2026-03-20T18:24:23.405 INFO:teuthology.orchestra.run.vm02.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-20T18:24:23.407 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntp.service: Unit ntp.service not found.
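[Annotation] The clock task issues one `a || b || c` chain per host so the same command works whether the node runs ntpd or chrony: stop whichever daemon exists, step the clock (ntpd -gq, else chronyc makestep), restart the daemon, then query peers. On these CentOS 9 nodes only chronyd is present, which is why every ntp/ntpd attempt fails and chronyc does the work; "506 Cannot talk to daemon" is chronyc makestep running while chronyd is stopped. A minimal sketch of that fallback pattern; first_success() is an illustrative helper:

    import subprocess

    def first_success(*commands):
        # Mirrors the shell `a || b || c`: stop at the first command exiting 0.
        for cmd in commands:
            if subprocess.run(cmd, shell=True).returncode == 0:
                return cmd
        return None

    first_success("sudo systemctl stop ntp.service",
                  "sudo systemctl stop ntpd.service",
                  "sudo systemctl stop chronyd.service")
    first_success("sudo ntpd -gq", "sudo chronyc makestep")  # step the clock
    first_success("sudo systemctl start ntp.service",
                  "sudo systemctl start ntpd.service",
                  "sudo systemctl start chronyd.service")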
2026-03-20T18:24:23.407 INFO:teuthology.orchestra.run.vm05.stdout:506 Cannot talk to daemon
2026-03-20T18:24:23.421 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-20T18:24:23.424 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-20T18:24:23.441 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-20T18:24:23.463 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: ntpq: command not found
2026-03-20T18:24:23.465 INFO:teuthology.orchestra.run.vm02.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T18:24:23.465 INFO:teuthology.orchestra.run.vm02.stdout:===============================================================================
2026-03-20T18:24:23.477 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-20T18:24:23.481 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T18:24:23.481 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-20T18:24:23.494 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-20T18:24:23.497 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T18:24:23.497 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-20T18:24:23.498 INFO:teuthology.run_tasks:Running task install...
2026-03-20T18:24:23.501 DEBUG:teuthology.task.install:project ceph
2026-03-20T18:24:23.501 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-20T18:24:23.501 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-20T18:24:23.501 INFO:teuthology.task.install:Using flavor: default
2026-03-20T18:24:23.503 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-20T18:24:23.503 INFO:teuthology.task.install:extra packages: []
2026-03-20T18:24:23.504 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-20T18:24:23.504 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T18:24:23.505 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-20T18:24:23.505 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T18:24:23.505 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-20T18:24:23.505 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T18:24:24.166 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-20T18:24:24.166 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-20T18:24:24.214 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-20T18:24:24.214 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-20T18:24:24.248 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-20T18:24:24.248 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-20T18:24:24.690 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-20T18:24:24.690 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T18:24:24.690 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-20T18:24:24.729 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-20T18:24:24.729 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-20T18:24:24.729 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-20T18:24:24.730 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-20T18:24:24.730 DEBUG:teuthology.orchestra.run.vm00:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-20T18:24:24.741 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-20T18:24:24.741 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-20T18:24:24.741 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-20T18:24:24.765 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-20T18:24:24.765 DEBUG:teuthology.orchestra.run.vm05:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-20T18:24:24.773 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-20T18:24:24.773 DEBUG:teuthology.orchestra.run.vm02:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-20T18:24:24.807 DEBUG:teuthology.orchestra.run.vm00:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-20T18:24:24.836 DEBUG:teuthology.orchestra.run.vm05:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-20T18:24:24.843 DEBUG:teuthology.orchestra.run.vm02:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-20T18:24:24.896 DEBUG:teuthology.orchestra.run.vm00:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-20T18:24:24.921 DEBUG:teuthology.orchestra.run.vm05:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-20T18:24:24.923 INFO:teuthology.orchestra.run.vm00.stdout:check_obsoletes = 1
2026-03-20T18:24:24.924 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean all
2026-03-20T18:24:24.927 DEBUG:teuthology.orchestra.run.vm02:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-20T18:24:24.948 INFO:teuthology.orchestra.run.vm05.stdout:check_obsoletes = 1
2026-03-20T18:24:24.949 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all
2026-03-20T18:24:24.957 INFO:teuthology.orchestra.run.vm02.stdout:check_obsoletes = 1
2026-03-20T18:24:24.959 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean all
2026-03-20T18:24:25.119 INFO:teuthology.orchestra.run.vm00.stdout:41 files removed
2026-03-20T18:24:25.150 INFO:teuthology.orchestra.run.vm05.stdout:41 files removed
2026-03-20T18:24:25.152 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-20T18:24:25.161 INFO:teuthology.orchestra.run.vm02.stdout:41 files removed
2026-03-20T18:24:25.187 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-20T18:24:25.208 DEBUG:teuthology.orchestra.run.vm02:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-20T18:24:26.547 INFO:teuthology.orchestra.run.vm02.stdout:ceph packages for x86_64 76 kB/s | 87 kB 00:01
2026-03-20T18:24:26.552 INFO:teuthology.orchestra.run.vm05.stdout:ceph packages for x86_64 75 kB/s | 87 kB 00:01
2026-03-20T18:24:26.578 INFO:teuthology.orchestra.run.vm00.stdout:ceph packages for x86_64 71 kB/s | 87 kB 00:01
2026-03-20T18:24:27.617 INFO:teuthology.orchestra.run.vm02.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-20T18:24:27.634 INFO:teuthology.orchestra.run.vm00.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-20T18:24:27.663 INFO:teuthology.orchestra.run.vm05.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-20T18:24:28.576 INFO:teuthology.orchestra.run.vm02.stdout:ceph source packages 2.1 kB/s | 1.9 kB 00:00
2026-03-20T18:24:28.586 INFO:teuthology.orchestra.run.vm00.stdout:ceph source packages 2.1 kB/s | 1.9 kB 00:00
2026-03-20T18:24:28.620 INFO:teuthology.orchestra.run.vm05.stdout:ceph source packages 2.1 kB/s | 1.9 kB 00:00
2026-03-20T18:24:29.827 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - BaseOS 7.5 MB/s | 8.9 MB 00:01
2026-03-20T18:24:29.927 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - BaseOS 6.7 MB/s | 8.9 MB 00:01
2026-03-20T18:24:31.705 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - AppStream 23 MB/s | 27 MB 00:01
2026-03-20T18:24:32.919 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - AppStream 12 MB/s | 27 MB 00:02
2026-03-20T18:24:33.297 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - BaseOS 1.9 MB/s | 8.9 MB 00:04
2026-03-20T18:24:35.646 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - CRB 7.5 MB/s | 8.0 MB 00:01
2026-03-20T18:24:36.242 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - AppStream 12 MB/s | 27 MB 00:02
2026-03-20T18:24:36.889 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - Extras packages 58 kB/s | 20 kB 00:00
2026-03-20T18:24:37.136 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - CRB 5.6 MB/s | 8.0 MB 00:01
2026-03-20T18:24:37.909 INFO:teuthology.orchestra.run.vm05.stdout:Extra Packages for Enterprise Linux 22 MB/s | 20 MB 00:00
2026-03-20T18:24:38.336 INFO:teuthology.orchestra.run.vm02.stdout:CentOS Stream 9 - Extras packages 70 kB/s | 20 kB 00:00
2026-03-20T18:24:39.187 INFO:teuthology.orchestra.run.vm02.stdout:Extra Packages for Enterprise Linux 27 MB/s | 20 MB 00:00
2026-03-20T18:24:40.339 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - CRB 7.1 MB/s | 8.0 MB 00:01
2026-03-20T18:24:41.867 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - Extras packages 37 kB/s | 20 kB 00:00
2026-03-20T18:24:42.332 INFO:teuthology.orchestra.run.vm00.stdout:Extra Packages for Enterprise Linux 53 MB/s | 20 MB 00:00
2026-03-20T18:24:42.659 INFO:teuthology.orchestra.run.vm05.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-20T18:24:44.113 INFO:teuthology.orchestra.run.vm05.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-20T18:24:44.113 INFO:teuthology.orchestra.run.vm05.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-20T18:24:44.149 INFO:teuthology.orchestra.run.vm02.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-20T18:24:44.149 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout:======================================================================================
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout:======================================================================================
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout:Installing:
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-20T18:24:44.154 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 126 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout:Upgrading:
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout:Installing dependencies:
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-20T18:24:44.155 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-20T18:24:44.156
INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-20T18:24:44.156 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 
2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout:Installing weak dependencies: 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout:====================================================================================== 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout:Install 136 Packages 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout:Upgrade 2 Packages 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout:Total download size: 267 M 2026-03-20T18:24:44.157 INFO:teuthology.orchestra.run.vm05.stdout:Downloading Packages: 2026-03-20T18:24:45.597 INFO:teuthology.orchestra.run.vm02.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-20T18:24:45.598 INFO:teuthology.orchestra.run.vm02.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-20T18:24:45.633 INFO:teuthology.orchestra.run.vm02.stdout:Dependencies resolved. 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout:====================================================================================== 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: Package Arch Version Repository Size 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout:====================================================================================== 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout:Installing: 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k 2026-03-20T18:24:45.638 
INFO:teuthology.orchestra.run.vm02.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 126 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-20T18:24:45.638 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout:Upgrading: 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout:Installing dependencies: 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: 
cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 
2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-20T18:24:45.639 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-20T18:24:45.640 
INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-20T18:24:45.640 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: socat 
x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout:Installing weak dependencies: 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout:Transaction Summary 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout:====================================================================================== 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout:Install 136 Packages 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout:Upgrade 2 Packages 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout:Total download size: 267 M 2026-03-20T18:24:45.641 INFO:teuthology.orchestra.run.vm02.stdout:Downloading Packages: 2026-03-20T18:24:45.867 INFO:teuthology.orchestra.run.vm05.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 13 kB/s | 6.5 kB 00:00 2026-03-20T18:24:46.676 INFO:teuthology.orchestra.run.vm05.stdout:(2/138): ceph-fuse-20.2.0-712.g70f8415b.el9.x86 1.1 MB/s | 939 kB 00:00 2026-03-20T18:24:46.793 INFO:teuthology.orchestra.run.vm05.stdout:(3/138): ceph-immutable-object-cache-20.2.0-712 1.3 MB/s | 154 kB 00:00 2026-03-20T18:24:46.923 INFO:teuthology.orchestra.run.vm02.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 15 kB/s | 6.5 kB 00:00 2026-03-20T18:24:47.046 INFO:teuthology.orchestra.run.vm05.stdout:(4/138): ceph-mds-20.2.0-712.g70f8415b.el9.x86_ 9.3 MB/s | 2.3 MB 00:00 2026-03-20T18:24:47.112 INFO:teuthology.orchestra.run.vm05.stdout:(5/138): ceph-base-20.2.0-712.g70f8415b.el9.x86 3.4 MB/s | 5.9 MB 00:01 2026-03-20T18:24:47.182 INFO:teuthology.orchestra.run.vm05.stdout:(6/138): ceph-mgr-20.2.0-712.g70f8415b.el9.x86_ 6.9 MB/s | 962 kB 00:00 2026-03-20T18:24:47.183 INFO:teuthology.orchestra.run.vm00.stdout:lab-extras 60 kB/s | 50 kB 00:00 2026-03-20T18:24:47.718 INFO:teuthology.orchestra.run.vm02.stdout:(2/138): ceph-fuse-20.2.0-712.g70f8415b.el9.x86 1.2 MB/s | 939 kB 00:00 2026-03-20T18:24:47.725 INFO:teuthology.orchestra.run.vm05.stdout:(7/138): ceph-mon-20.2.0-712.g70f8415b.el9.x86_ 8.2 MB/s | 5.0 MB 00:00 2026-03-20T18:24:47.830 INFO:teuthology.orchestra.run.vm02.stdout:(3/138): ceph-immutable-object-cache-20.2.0-712 1.3 MB/s | 154 kB 00:00 2026-03-20T18:24:48.015 INFO:teuthology.orchestra.run.vm05.stdout:(8/138): ceph-common-20.2.0-712.g70f8415b.el9.x 9.0 MB/s | 24 MB 00:02 2026-03-20T18:24:48.082 INFO:teuthology.orchestra.run.vm02.stdout:(4/138): ceph-mds-20.2.0-712.g70f8415b.el9.x86_ 9.3 MB/s | 2.3 MB 00:00 2026-03-20T18:24:48.133 INFO:teuthology.orchestra.run.vm05.stdout:(9/138): ceph-selinux-20.2.0-712.g70f8415b.el9. 
213 kB/s | 25 kB 00:00 2026-03-20T18:24:48.134 INFO:teuthology.orchestra.run.vm02.stdout:(5/138): ceph-base-20.2.0-712.g70f8415b.el9.x86 3.6 MB/s | 5.9 MB 00:01 2026-03-20T18:24:48.206 INFO:teuthology.orchestra.run.vm02.stdout:(6/138): ceph-mgr-20.2.0-712.g70f8415b.el9.x86_ 7.6 MB/s | 962 kB 00:00 2026-03-20T18:24:48.611 INFO:teuthology.orchestra.run.vm00.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-20T18:24:48.611 INFO:teuthology.orchestra.run.vm00.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-20T18:24:48.643 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout:====================================================================================== 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout:====================================================================================== 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout:Installing: 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 126 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k 
2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k 2026-03-20T18:24:48.648 INFO:teuthology.orchestra.run.vm00.stdout:Upgrading: 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout:Installing dependencies: 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 
2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-20T18:24:48.649 
INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-20T18:24:48.649 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 
2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-20T18:24:48.650 INFO:teuthology.orchestra.run.vm00.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout:Installing weak dependencies: 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-20T18:24:48.651 
INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout:====================================================================================== 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout:Install 136 Packages 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout:Upgrade 2 Packages 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout:Total download size: 267 M 2026-03-20T18:24:48.651 INFO:teuthology.orchestra.run.vm00.stdout:Downloading Packages: 2026-03-20T18:24:48.812 INFO:teuthology.orchestra.run.vm05.stdout:(10/138): ceph-osd-20.2.0-712.g70f8415b.el9.x86 10 MB/s | 17 MB 00:01 2026-03-20T18:24:48.928 INFO:teuthology.orchestra.run.vm05.stdout:(11/138): libcephfs-devel-20.2.0-712.g70f8415b. 296 kB/s | 34 kB 00:00 2026-03-20T18:24:48.963 INFO:teuthology.orchestra.run.vm02.stdout:(7/138): ceph-mon-20.2.0-712.g70f8415b.el9.x86_ 6.1 MB/s | 5.0 MB 00:00 2026-03-20T18:24:49.045 INFO:teuthology.orchestra.run.vm05.stdout:(12/138): libcephfs-proxy2-20.2.0-712.g70f8415b 208 kB/s | 24 kB 00:00 2026-03-20T18:24:49.177 INFO:teuthology.orchestra.run.vm05.stdout:(13/138): libcephfs2-20.2.0-712.g70f8415b.el9.x 6.4 MB/s | 866 kB 00:00 2026-03-20T18:24:49.299 INFO:teuthology.orchestra.run.vm05.stdout:(14/138): libcephsqlite-20.2.0-712.g70f8415b.el 1.3 MB/s | 164 kB 00:00 2026-03-20T18:24:49.419 INFO:teuthology.orchestra.run.vm05.stdout:(15/138): librados-devel-20.2.0-712.g70f8415b.e 1.0 MB/s | 126 kB 00:00 2026-03-20T18:24:49.471 INFO:teuthology.orchestra.run.vm02.stdout:(8/138): ceph-common-20.2.0-712.g70f8415b.el9.x 8.0 MB/s | 24 MB 00:02 2026-03-20T18:24:49.539 INFO:teuthology.orchestra.run.vm05.stdout:(16/138): libradosstriper1-20.2.0-712.g70f8415b 2.0 MB/s | 250 kB 00:00 2026-03-20T18:24:49.591 INFO:teuthology.orchestra.run.vm02.stdout:(9/138): ceph-selinux-20.2.0-712.g70f8415b.el9. 209 kB/s | 25 kB 00:00 2026-03-20T18:24:50.137 INFO:teuthology.orchestra.run.vm05.stdout:(17/138): librgw2-20.2.0-712.g70f8415b.el9.x86_ 11 MB/s | 6.4 MB 00:00 2026-03-20T18:24:50.254 INFO:teuthology.orchestra.run.vm05.stdout:(18/138): python3-ceph-argparse-20.2.0-712.g70f 385 kB/s | 45 kB 00:00 2026-03-20T18:24:50.323 INFO:teuthology.orchestra.run.vm00.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-20T18:24:50.383 INFO:teuthology.orchestra.run.vm05.stdout:(19/138): python3-ceph-common-20.2.0-712.g70f84 1.3 MB/s | 175 kB 00:00 2026-03-20T18:24:50.503 INFO:teuthology.orchestra.run.vm05.stdout:(20/138): python3-cephfs-20.2.0-712.g70f8415b.e 1.3 MB/s | 163 kB 00:00 2026-03-20T18:24:50.625 INFO:teuthology.orchestra.run.vm05.stdout:(21/138): python3-rados-20.2.0-712.g70f8415b.el 2.6 MB/s | 324 kB 00:00 2026-03-20T18:24:50.746 INFO:teuthology.orchestra.run.vm05.stdout:(22/138): python3-rbd-20.2.0-712.g70f8415b.el9. 2.5 MB/s | 304 kB 00:00 2026-03-20T18:24:50.865 INFO:teuthology.orchestra.run.vm05.stdout:(23/138): python3-rgw-20.2.0-712.g70f8415b.el9. 
[dnf download progress elided: packages (2/138) through (137/138), 2026-03-20T18:24:50.983 to 18:25:08.613, fetched in parallel on vm00, vm02, and vm05: the ceph 20.2.0-712.g70f8415b.el9 build (ceph-base, ceph-common, ceph-mon, ceph-mgr, ceph-mds, ceph-osd, ceph-radosgw, ceph-volume, cephadm, ceph-test, librados2, librbd1, librgw2, and related subpackages), the extra test dependencies (bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd), and their library requirements]
2026-03-20T18:25:08.140 INFO:teuthology.orchestra.run.vm02.stdout:(137/138): librbd1-20.2.0-712.g70f8415b.el9.x86 961 kB/s | 2.8 MB 00:03 2026-03-20T18:25:08.613 INFO:teuthology.orchestra.run.vm00.stdout:(137/138): librbd1-20.2.0-712.g70f8415b.el9.x86 1.0 MB/s | 2.8 MB 00:02 2026-03-20T18:25:23.722 INFO:teuthology.orchestra.run.vm02.stdout:(138/138): ceph-test-20.2.0-712.g70f8415b.el9.x 2.5 MB/s | 84 MB 00:34 2026-03-20T18:25:23.726 INFO:teuthology.orchestra.run.vm02.stdout:-------------------------------------------------------------------------------- 2026-03-20T18:25:23.727 INFO:teuthology.orchestra.run.vm02.stdout:Total 7.0 MB/s | 267 MB 00:38 2026-03-20T18:25:24.418 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction check 2026-03-20T18:25:24.484 INFO:teuthology.orchestra.run.vm02.stdout:Transaction check succeeded. 2026-03-20T18:25:24.484 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction test 2026-03-20T18:25:25.563 INFO:teuthology.orchestra.run.vm02.stdout:Transaction test succeeded. 2026-03-20T18:25:25.563 INFO:teuthology.orchestra.run.vm02.stdout:Running transaction 2026-03-20T18:25:26.680 INFO:teuthology.orchestra.run.vm02.stdout: Preparing : 1/1 2026-03-20T18:25:26.696 INFO:teuthology.orchestra.run.vm02.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140 2026-03-20T18:25:26.702 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140 2026-03-20T18:25:26.716 INFO:teuthology.orchestra.run.vm02.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140 2026-03-20T18:25:26.897 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140 2026-03-20T18:25:26.899 INFO:teuthology.orchestra.run.vm02.stdout: Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-20T18:25:26.936 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-20T18:25:26.945 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-20T18:25:26.950 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140 2026-03-20T18:25:26.954 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140 2026-03-20T18:25:26.956 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140 2026-03-20T18:25:26.962 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140 2026-03-20T18:25:27.111 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140 2026-03-20T18:25:27.114 INFO:teuthology.orchestra.run.vm02.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T18:25:27.136 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T18:25:27.138 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-20T18:25:27.165 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-20T18:25:27.166 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T18:25:27.180 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T18:25:27.214 INFO:teuthology.orchestra.run.vm02.stdout: Installing : 
re2-1:20211101-20.el9.x86_64 15/140 2026-03-20T18:25:27.243 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140 2026-03-20T18:25:27.260 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/140 2026-03-20T18:25:27.268 INFO:teuthology.orchestra.run.vm02.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140 2026-03-20T18:25:27.273 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140 2026-03-20T18:25:27.279 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140 2026-03-20T18:25:27.310 INFO:teuthology.orchestra.run.vm02.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140 2026-03-20T18:25:27.331 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140 2026-03-20T18:25:27.337 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140 2026-03-20T18:25:27.345 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140 2026-03-20T18:25:27.349 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140 2026-03-20T18:25:27.395 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140 2026-03-20T18:25:27.405 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140 2026-03-20T18:25:27.409 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140 2026-03-20T18:25:27.411 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-20T18:25:27.472 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-20T18:25:27.474 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-20T18:25:27.498 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-20T18:25:27.514 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140 2026-03-20T18:25:27.524 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140 2026-03-20T18:25:27.556 INFO:teuthology.orchestra.run.vm02.stdout: Installing : zip-3.0-35.el9.x86_64 33/140 2026-03-20T18:25:27.562 INFO:teuthology.orchestra.run.vm02.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140 2026-03-20T18:25:27.571 INFO:teuthology.orchestra.run.vm02.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140 2026-03-20T18:25:27.637 INFO:teuthology.orchestra.run.vm02.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140 2026-03-20T18:25:27.654 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140 2026-03-20T18:25:27.674 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/140 2026-03-20T18:25:27.681 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140 2026-03-20T18:25:27.690 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140 2026-03-20T18:25:27.697 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140 2026-03-20T18:25:27.701 
INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140 2026-03-20T18:25:27.720 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140 2026-03-20T18:25:27.729 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140 2026-03-20T18:25:27.738 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140 2026-03-20T18:25:27.752 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140 2026-03-20T18:25:27.765 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140 2026-03-20T18:25:27.771 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140 2026-03-20T18:25:27.785 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140 2026-03-20T18:25:27.837 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140 2026-03-20T18:25:28.308 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140 2026-03-20T18:25:28.324 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140 2026-03-20T18:25:28.330 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140 2026-03-20T18:25:28.337 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140 2026-03-20T18:25:28.342 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140 2026-03-20T18:25:28.350 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140 2026-03-20T18:25:28.353 INFO:teuthology.orchestra.run.vm02.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140 2026-03-20T18:25:28.356 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140 2026-03-20T18:25:28.398 INFO:teuthology.orchestra.run.vm02.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140 2026-03-20T18:25:28.459 INFO:teuthology.orchestra.run.vm02.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140 2026-03-20T18:25:28.479 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140 2026-03-20T18:25:28.488 INFO:teuthology.orchestra.run.vm02.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140 2026-03-20T18:25:28.495 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140 2026-03-20T18:25:28.504 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140 2026-03-20T18:25:28.510 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140 2026-03-20T18:25:28.522 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140 2026-03-20T18:25:28.528 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140 2026-03-20T18:25:28.582 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 68/140 2026-03-20T18:25:28.601 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140 2026-03-20T18:25:28.611 INFO:teuthology.orchestra.run.vm02.stdout: Installing : 
python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140 2026-03-20T18:25:28.621 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140 2026-03-20T18:25:28.678 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140 2026-03-20T18:25:28.777 INFO:teuthology.orchestra.run.vm00.stdout:(138/138): ceph-test-20.2.0-712.g70f8415b.el9.x 2.4 MB/s | 84 MB 00:35 2026-03-20T18:25:28.780 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-20T18:25:28.780 INFO:teuthology.orchestra.run.vm00.stdout:Total 6.6 MB/s | 267 MB 00:40 2026-03-20T18:25:28.993 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140 2026-03-20T18:25:29.025 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140 2026-03-20T18:25:29.030 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-20T18:25:29.034 INFO:teuthology.orchestra.run.vm02.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140 2026-03-20T18:25:29.101 INFO:teuthology.orchestra.run.vm02.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140 2026-03-20T18:25:29.104 INFO:teuthology.orchestra.run.vm02.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/140 2026-03-20T18:25:29.130 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140 2026-03-20T18:25:29.373 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-20T18:25:29.442 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-03-20T18:25:29.442 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-20T18:25:29.548 INFO:teuthology.orchestra.run.vm02.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140 2026-03-20T18:25:29.640 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140 2026-03-20T18:25:30.504 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-03-20T18:25:30.504 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T18:25:30.584 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140
2026-03-20T18:25:30.614 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140
2026-03-20T18:25:30.620 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140
2026-03-20T18:25:30.624 INFO:teuthology.orchestra.run.vm02.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140
2026-03-20T18:25:30.631 INFO:teuthology.orchestra.run.vm02.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140
2026-03-20T18:25:30.985 INFO:teuthology.orchestra.run.vm02.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140
2026-03-20T18:25:30.987 INFO:teuthology.orchestra.run.vm02.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T18:25:31.015 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T18:25:31.017 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140
2026-03-20T18:25:31.720 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T18:25:31.730 INFO:teuthology.orchestra.run.vm00.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140
2026-03-20T18:25:31.792 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140
2026-03-20T18:25:31.817 INFO:teuthology.orchestra.run.vm05.stdout:(138/138): ceph-test-20.2.0-712.g70f8415b.el9.x 1.9 MB/s | 84 MB 00:43
2026-03-20T18:25:31.823 INFO:teuthology.orchestra.run.vm05.stdout:--------------------------------------------------------------------------------
2026-03-20T18:25:31.823 INFO:teuthology.orchestra.run.vm05.stdout:Total 5.6 MB/s | 267 MB 00:47
2026-03-20T18:25:31.828 INFO:teuthology.orchestra.run.vm00.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140
2026-03-20T18:25:32.020 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140
2026-03-20T18:25:32.021 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140
2026-03-20T18:25:32.058 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140
2026-03-20T18:25:32.070 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-20T18:25:32.074 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140
2026-03-20T18:25:32.080 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140
2026-03-20T18:25:32.083 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140
2026-03-20T18:25:32.088 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140
2026-03-20T18:25:32.249 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140
2026-03-20T18:25:32.252 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T18:25:32.274 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T18:25:32.275 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140
2026-03-20T18:25:32.301 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140
2026-03-20T18:25:32.302 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T18:25:32.315 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T18:25:32.334 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:32.339 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:32.356 INFO:teuthology.orchestra.run.vm00.stdout: Installing : re2-1:20211101-20.el9.x86_64 15/140
2026-03-20T18:25:32.361 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:32.374 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 91/140
2026-03-20T18:25:32.383 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140
2026-03-20T18:25:32.385 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140
2026-03-20T18:25:32.398 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/140
2026-03-20T18:25:32.402 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140
2026-03-20T18:25:32.405 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140
2026-03-20T18:25:32.409 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140
2026-03-20T18:25:32.415 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140
2026-03-20T18:25:32.427 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140
2026-03-20T18:25:32.451 INFO:teuthology.orchestra.run.vm00.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140
2026-03-20T18:25:32.457 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-20T18:25:32.469 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140
2026-03-20T18:25:32.474 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140
2026-03-20T18:25:32.481 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140
2026-03-20T18:25:32.484 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140
2026-03-20T18:25:32.520 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-20T18:25:32.520 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-20T18:25:32.524 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140
2026-03-20T18:25:32.525 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140
2026-03-20T18:25:32.532 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140
2026-03-20T18:25:32.535 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140
2026-03-20T18:25:32.536 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140
2026-03-20T18:25:32.540 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140
2026-03-20T18:25:32.575 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140
2026-03-20T18:25:32.590 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140
2026-03-20T18:25:32.591 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140
2026-03-20T18:25:32.613 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140
2026-03-20T18:25:32.616 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140
2026-03-20T18:25:32.632 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140
2026-03-20T18:25:32.642 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140
2026-03-20T18:25:32.675 INFO:teuthology.orchestra.run.vm00.stdout: Installing : zip-3.0-35.el9.x86_64 33/140
2026-03-20T18:25:32.686 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 99/140
2026-03-20T18:25:32.686 INFO:teuthology.orchestra.run.vm00.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140
2026-03-20T18:25:32.711 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140
2026-03-20T18:25:32.712 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140
2026-03-20T18:25:32.717 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140
2026-03-20T18:25:32.724 INFO:teuthology.orchestra.run.vm02.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140
2026-03-20T18:25:32.729 INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140
2026-03-20T18:25:32.731 INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T18:25:32.753 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T18:25:32.775 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140
2026-03-20T18:25:32.795 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140
2026-03-20T18:25:32.815 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/140
2026-03-20T18:25:32.822 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140
2026-03-20T18:25:32.833 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140
2026-03-20T18:25:32.840 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140
2026-03-20T18:25:32.845 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140
2026-03-20T18:25:32.863 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140
2026-03-20T18:25:32.871 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140
2026-03-20T18:25:32.880 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140
2026-03-20T18:25:32.898 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140
2026-03-20T18:25:32.915 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140
2026-03-20T18:25:32.924 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140
2026-03-20T18:25:32.937 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140
2026-03-20T18:25:33.000 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140
2026-03-20T18:25:33.105 INFO:teuthology.orchestra.run.vm02.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140
2026-03-20T18:25:33.112 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T18:25:33.160 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T18:25:33.160 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-20T18:25:33.160 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-20T18:25:33.160 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:33.165 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T18:25:33.432 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140
2026-03-20T18:25:33.449 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140
2026-03-20T18:25:33.455 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140
2026-03-20T18:25:33.462 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140
2026-03-20T18:25:33.467 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140
2026-03-20T18:25:33.474 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140
2026-03-20T18:25:33.479 INFO:teuthology.orchestra.run.vm00.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140
2026-03-20T18:25:33.480 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140
2026-03-20T18:25:33.516 INFO:teuthology.orchestra.run.vm00.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140
2026-03-20T18:25:33.574 INFO:teuthology.orchestra.run.vm00.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140
2026-03-20T18:25:33.588 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140
2026-03-20T18:25:33.598 INFO:teuthology.orchestra.run.vm00.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140
2026-03-20T18:25:33.605 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140
2026-03-20T18:25:33.614 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140
2026-03-20T18:25:33.617 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-20T18:25:33.617 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-20T18:25:33.622 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140
2026-03-20T18:25:33.632 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140
2026-03-20T18:25:33.639 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140
2026-03-20T18:25:33.678 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 68/140
2026-03-20T18:25:33.692 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140
2026-03-20T18:25:33.700 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140
2026-03-20T18:25:33.710 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140
2026-03-20T18:25:33.754 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140
2026-03-20T18:25:34.039 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140
2026-03-20T18:25:34.073 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140
2026-03-20T18:25:34.077 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-20T18:25:34.081 INFO:teuthology.orchestra.run.vm00.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140
2026-03-20T18:25:34.156 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140
2026-03-20T18:25:34.159 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/140
2026-03-20T18:25:34.192 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140
2026-03-20T18:25:34.619 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140
2026-03-20T18:25:34.725 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140
2026-03-20T18:25:34.807 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-20T18:25:34.816 INFO:teuthology.orchestra.run.vm05.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140
2026-03-20T18:25:34.821 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140
2026-03-20T18:25:34.834 INFO:teuthology.orchestra.run.vm05.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140
2026-03-20T18:25:35.017 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140
2026-03-20T18:25:35.019 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140
2026-03-20T18:25:35.057 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140
2026-03-20T18:25:35.069 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-20T18:25:35.074 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140
2026-03-20T18:25:35.079 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140
2026-03-20T18:25:35.082 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140
2026-03-20T18:25:35.090 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140
2026-03-20T18:25:35.239 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140
2026-03-20T18:25:35.242 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T18:25:35.265 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T18:25:35.268 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140
2026-03-20T18:25:35.296 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140
2026-03-20T18:25:35.297 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T18:25:35.318 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T18:25:35.360 INFO:teuthology.orchestra.run.vm05.stdout: Installing : re2-1:20211101-20.el9.x86_64 15/140
2026-03-20T18:25:35.388 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140
2026-03-20T18:25:35.401 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/140
2026-03-20T18:25:35.409 INFO:teuthology.orchestra.run.vm05.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140
2026-03-20T18:25:35.413 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140
2026-03-20T18:25:35.419 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140
2026-03-20T18:25:35.449 INFO:teuthology.orchestra.run.vm05.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140
2026-03-20T18:25:35.467 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140
2026-03-20T18:25:35.473 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140
2026-03-20T18:25:35.481 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140
2026-03-20T18:25:35.484 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140
2026-03-20T18:25:35.525 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140
2026-03-20T18:25:35.534 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140
2026-03-20T18:25:35.537 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140
2026-03-20T18:25:35.538 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140
2026-03-20T18:25:35.600 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140
2026-03-20T18:25:35.602 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140
2026-03-20T18:25:35.609 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140
2026-03-20T18:25:35.628 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140
2026-03-20T18:25:35.641 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140
2026-03-20T18:25:35.643 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140
2026-03-20T18:25:35.647 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140
2026-03-20T18:25:35.650 INFO:teuthology.orchestra.run.vm00.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140
2026-03-20T18:25:35.651 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140
2026-03-20T18:25:35.658 INFO:teuthology.orchestra.run.vm00.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140
2026-03-20T18:25:35.682 INFO:teuthology.orchestra.run.vm05.stdout: Installing : zip-3.0-35.el9.x86_64 33/140
2026-03-20T18:25:35.689 INFO:teuthology.orchestra.run.vm05.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140
2026-03-20T18:25:35.698 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140
2026-03-20T18:25:35.761 INFO:teuthology.orchestra.run.vm05.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140
2026-03-20T18:25:35.779 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140
2026-03-20T18:25:35.799 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/140
2026-03-20T18:25:35.806 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140
2026-03-20T18:25:35.816 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140
2026-03-20T18:25:35.823 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140
2026-03-20T18:25:35.827 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140
2026-03-20T18:25:35.846 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140
2026-03-20T18:25:35.852 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140
2026-03-20T18:25:35.860 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140
2026-03-20T18:25:35.875 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140
2026-03-20T18:25:35.894 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140
2026-03-20T18:25:35.934 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140
2026-03-20T18:25:35.953 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140
2026-03-20T18:25:35.996 INFO:teuthology.orchestra.run.vm00.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140
2026-03-20T18:25:35.998 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T18:25:36.015 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140
2026-03-20T18:25:36.021 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T18:25:36.022 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140
2026-03-20T18:25:36.438 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140
2026-03-20T18:25:36.456 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140
2026-03-20T18:25:36.464 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140
2026-03-20T18:25:36.471 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140
2026-03-20T18:25:36.476 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140
2026-03-20T18:25:36.483 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140
2026-03-20T18:25:36.488 INFO:teuthology.orchestra.run.vm05.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140
2026-03-20T18:25:36.490 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140
2026-03-20T18:25:36.523 INFO:teuthology.orchestra.run.vm05.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140
2026-03-20T18:25:36.582 INFO:teuthology.orchestra.run.vm05.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140
2026-03-20T18:25:36.600 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140
2026-03-20T18:25:36.610 INFO:teuthology.orchestra.run.vm05.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140
2026-03-20T18:25:36.617 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140
2026-03-20T18:25:36.625 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140
2026-03-20T18:25:36.633 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140
2026-03-20T18:25:36.642 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140
2026-03-20T18:25:36.649 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140
2026-03-20T18:25:36.690 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 68/140
2026-03-20T18:25:36.706 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140
2026-03-20T18:25:36.714 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140
2026-03-20T18:25:36.725 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140
2026-03-20T18:25:36.771 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140
2026-03-20T18:25:37.069 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140
2026-03-20T18:25:37.102 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140
2026-03-20T18:25:37.106 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-20T18:25:37.110 INFO:teuthology.orchestra.run.vm05.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140
2026-03-20T18:25:37.177 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140
2026-03-20T18:25:37.180 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/140
2026-03-20T18:25:37.205 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140
2026-03-20T18:25:37.349 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:37.356 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:37.384 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:37.397 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 91/140
2026-03-20T18:25:37.407 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140
2026-03-20T18:25:37.427 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140
2026-03-20T18:25:37.449 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140
2026-03-20T18:25:37.544 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140
2026-03-20T18:25:37.559 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140
2026-03-20T18:25:37.591 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140
2026-03-20T18:25:37.619 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140
2026-03-20T18:25:37.637 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140
2026-03-20T18:25:37.702 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 99/140
2026-03-20T18:25:37.713 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140
2026-03-20T18:25:37.719 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140
2026-03-20T18:25:37.726 INFO:teuthology.orchestra.run.vm00.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140
2026-03-20T18:25:37.729 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140
2026-03-20T18:25:37.730 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140
2026-03-20T18:25:37.732 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T18:25:37.753 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T18:25:38.126 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140
2026-03-20T18:25:38.135 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T18:25:38.186 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T18:25:38.186 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-20T18:25:38.186 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-20T18:25:38.186 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:38.193 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T18:25:38.723 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140
2026-03-20T18:25:38.790 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140
2026-03-20T18:25:38.907 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140
2026-03-20T18:25:39.037 INFO:teuthology.orchestra.run.vm05.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140
2026-03-20T18:25:39.128 INFO:teuthology.orchestra.run.vm05.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140
2026-03-20T18:25:39.500 INFO:teuthology.orchestra.run.vm05.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140
2026-03-20T18:25:39.503 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T18:25:39.534 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140
2026-03-20T18:25:39.535 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /sys
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /proc
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /mnt
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /var/tmp
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /home
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /root
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout:skipping the directory /tmp
2026-03-20T18:25:39.890 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:40.034 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T18:25:40.072 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T18:25:40.072 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:40.072 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T18:25:40.072 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T18:25:40.072 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T18:25:40.072 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:40.427 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T18:25:40.454 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T18:25:40.454 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:40.454 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T18:25:40.454 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T18:25:40.454 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T18:25:40.454 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:40.540 INFO:teuthology.orchestra.run.vm02.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140
2026-03-20T18:25:40.568 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140
2026-03-20T18:25:40.592 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:40.592 INFO:teuthology.orchestra.run.vm02.stdout:Creating group 'qat' with GID 994.
2026-03-20T18:25:40.592 INFO:teuthology.orchestra.run.vm02.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-20T18:25:40.592 INFO:teuthology.orchestra.run.vm02.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-20T18:25:40.592 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:40.604 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:40.640 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:40.640 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-20T18:25:40.640 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:40.666 INFO:teuthology.orchestra.run.vm02.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140
2026-03-20T18:25:40.700 INFO:teuthology.orchestra.run.vm02.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140
2026-03-20T18:25:40.788 INFO:teuthology.orchestra.run.vm02.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140
2026-03-20T18:25:40.794 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T18:25:40.814 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T18:25:40.814 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:40.814 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T18:25:40.814 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:40.936 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:40.941 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:40.969 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140
2026-03-20T18:25:40.987 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 91/140
2026-03-20T18:25:40.998 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140
2026-03-20T18:25:41.023 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140
2026-03-20T18:25:41.051 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140
2026-03-20T18:25:41.164 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140
2026-03-20T18:25:41.179 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140
2026-03-20T18:25:41.215 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140
2026-03-20T18:25:41.262 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140
2026-03-20T18:25:41.329 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 99/140
2026-03-20T18:25:41.341 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140
2026-03-20T18:25:41.347 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140
2026-03-20T18:25:41.354 INFO:teuthology.orchestra.run.vm05.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140
2026-03-20T18:25:41.358 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140
2026-03-20T18:25:41.361 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T18:25:41.384 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140
2026-03-20T18:25:41.677 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T18:25:41.705 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T18:25:41.705 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:41.705 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T18:25:41.705 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T18:25:41.705 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T18:25:41.705 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:41.736 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140
2026-03-20T18:25:41.742 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T18:25:41.785 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T18:25:41.789 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140
2026-03-20T18:25:41.789 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-20T18:25:41.789 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-20T18:25:41.789 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:41.789 INFO:teuthology.orchestra.run.vm02.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T18:25:41.796 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T18:25:41.799 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140
2026-03-20T18:25:41.829 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140
2026-03-20T18:25:41.832 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T18:25:43.221 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T18:25:43.233 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T18:25:43.821 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T18:25:43.824 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T18:25:43.897 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T18:25:43.954 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140
2026-03-20T18:25:43.957 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T18:25:43.982 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T18:25:43.982 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:43.982 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T18:25:43.982 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T18:25:43.982 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T18:25:43.982 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:44.002 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T18:25:44.019 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T18:25:44.067 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140
2026-03-20T18:25:45.138 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T18:25:45.138 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /sys
2026-03-20T18:25:45.139 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /proc
2026-03-20T18:25:45.139 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /mnt
2026-03-20T18:25:45.139 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /var/tmp
2026-03-20T18:25:45.139 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /home
2026-03-20T18:25:45.139 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /root
2026-03-20T18:25:45.139 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /tmp
2026-03-20T18:25:45.139 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:45.275 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T18:25:45.303 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T18:25:45.303 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:45.303 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T18:25:45.303 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T18:25:45.303 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T18:25:45.303 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:45.337 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140
2026-03-20T18:25:45.341 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T18:25:45.369 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T18:25:45.369 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:45.369 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-20T18:25:45.369 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T18:25:45.369 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T18:25:45.369 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:45.383 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T18:25:45.413 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T18:25:45.413 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:45.413 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T18:25:45.413 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:45.563 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T18:25:45.587 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T18:25:45.587 INFO:teuthology.orchestra.run.vm02.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:45.587 INFO:teuthology.orchestra.run.vm02.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T18:25:45.587 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T18:25:45.587 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T18:25:45.587 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:45.593 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T18:25:45.615 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T18:25:45.615 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:45.615 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T18:25:45.615 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T18:25:45.615 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T18:25:45.616 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:45.623 INFO:teuthology.orchestra.run.vm00.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140
2026-03-20T18:25:45.626 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140
2026-03-20T18:25:45.644 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:45.644 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'qat' with GID 994.
2026-03-20T18:25:45.644 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-20T18:25:45.644 INFO:teuthology.orchestra.run.vm00.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-20T18:25:45.644 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:45.654 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:45.683 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:45.683 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-20T18:25:45.683 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:45.707 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140
2026-03-20T18:25:45.735 INFO:teuthology.orchestra.run.vm00.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140
2026-03-20T18:25:45.812 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140
2026-03-20T18:25:45.817 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T18:25:45.834 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T18:25:45.834 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:45.834 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T18:25:45.834 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:46.652 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T18:25:46.679 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T18:25:46.679 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:46.679 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T18:25:46.679 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T18:25:46.679 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T18:25:46.679 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:46.757 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T18:25:46.761 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T18:25:46.769 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140
2026-03-20T18:25:46.798 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140
2026-03-20T18:25:46.802 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T18:25:48.242 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T18:25:48.254 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T18:25:48.703 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140
2026-03-20T18:25:48.704 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys
2026-03-20T18:25:48.704 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc
2026-03-20T18:25:48.704 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt
2026-03-20T18:25:48.704 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp
2026-03-20T18:25:48.704 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home
2026-03-20T18:25:48.704 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root
2026-03-20T18:25:48.704 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp
2026-03-20T18:25:48.704 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:48.843 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T18:25:48.876 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140
2026-03-20T18:25:48.876 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:48.876 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T18:25:48.876 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T18:25:48.876 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-20T18:25:48.876 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:48.885 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T18:25:48.889 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T18:25:48.959 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T18:25:49.072 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140
2026-03-20T18:25:49.118 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T18:25:49.144 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T18:25:49.144 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:49.144 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T18:25:49.144 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T18:25:49.144 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T18:25:49.144 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:49.232 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T18:25:49.249 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T18:25:49.261 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140
2026-03-20T18:25:49.262 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:49.262 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T18:25:49.262 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T18:25:49.262 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-20T18:25:49.262 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:49.262 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T18:25:49.392 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140
2026-03-20T18:25:49.516 INFO:teuthology.orchestra.run.vm05.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140
2026-03-20T18:25:49.527 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140
2026-03-20T18:25:49.569 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:49.570 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'qat' with GID 994.
2026-03-20T18:25:49.570 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-20T18:25:49.570 INFO:teuthology.orchestra.run.vm05.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-20T18:25:49.570 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:49.643 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:49.692 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140
2026-03-20T18:25:49.692 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-20T18:25:49.692 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:49.760 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140
2026-03-20T18:25:49.829 INFO:teuthology.orchestra.run.vm05.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140
2026-03-20T18:25:50.004 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140
2026-03-20T18:25:50.009 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T18:25:50.028 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140
2026-03-20T18:25:50.028 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:50.028 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T18:25:50.028 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:50.239 INFO:teuthology.orchestra.run.vm02.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140
2026-03-20T18:25:50.248 INFO:teuthology.orchestra.run.vm02.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140
2026-03-20T18:25:50.255 INFO:teuthology.orchestra.run.vm02.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140
2026-03-20T18:25:50.268 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140
2026-03-20T18:25:50.291 INFO:teuthology.orchestra.run.vm02.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140
2026-03-20T18:25:50.301 INFO:teuthology.orchestra.run.vm02.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140
2026-03-20T18:25:50.306 INFO:teuthology.orchestra.run.vm02.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140
2026-03-20T18:25:50.306 INFO:teuthology.orchestra.run.vm02.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-20T18:25:50.327 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-20T18:25:50.327 INFO:teuthology.orchestra.run.vm02.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-20T18:25:50.825 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140
2026-03-20T18:25:50.832 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T18:25:50.861 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T18:25:50.861 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:50.861 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-20T18:25:50.861 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T18:25:50.861 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T18:25:50.861 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:50.875 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T18:25:50.903 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T18:25:50.903 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:50.903 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T18:25:50.903 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:50.941 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T18:25:50.972 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140
2026-03-20T18:25:50.973 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:50.973 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T18:25:50.973 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T18:25:50.973 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-20T18:25:50.973 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:51.063 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T18:25:51.067 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140
2026-03-20T18:25:51.076 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T18:25:51.077 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140
2026-03-20T18:25:51.102 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T18:25:51.103 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:51.103 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T18:25:51.103 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T18:25:51.103 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T18:25:51.103 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:25:51.112 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140
2026-03-20T18:25:51.165 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T18:25:52.076 INFO:teuthology.orchestra.run.vm02.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-20T18:25:52.076 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140
2026-03-20T18:25:52.076 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140
2026-03-20T18:25:52.076 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140
2026-03-20T18:25:52.076 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140
2026-03-20T18:25:52.076 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140
2026-03-20T18:25:52.077 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140
2026-03-20T18:25:52.078 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 44/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 51/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140
2026-03-20T18:25:52.079 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140
2026-03-20T18:25:52.080 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 107/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140
2026-03-20T18:25:52.081 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 121/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140
2026-03-20T18:25:52.082 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140
2026-03-20T18:25:52.186 INFO:teuthology.orchestra.run.vm02.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140
2026-03-20T18:25:52.186 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:52.186 INFO:teuthology.orchestra.run.vm02.stdout:Upgraded:
2026-03-20T18:25:52.186 INFO:teuthology.orchestra.run.vm02.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.186 INFO:teuthology.orchestra.run.vm02.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.186 INFO:teuthology.orchestra.run.vm02.stdout:Installed:
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: bzip2-1.0.8-11.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: fuse-2.9.9-17.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: lua-5.4.4-4.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-20T18:25:52.187 INFO:teuthology.orchestra.run.vm02.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: perl-Benchmark-1.23-483.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: perl-Test-Harness-1:3.42-461.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-ply-3.11-14.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyparsing-2.4.7-9.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-20T18:25:52.188 INFO:teuthology.orchestra.run.vm02.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: re2-1:20211101-20.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: s3cmd-2.4.0-1.el9.noarch
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: unzip-6.0-59.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout: zip-3.0-35.el9.x86_64
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-20T18:25:52.189 INFO:teuthology.orchestra.run.vm02.stdout:Complete!
2026-03-20T18:25:52.289 DEBUG:teuthology.parallel:result is None
2026-03-20T18:25:52.539 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140
2026-03-20T18:25:52.551 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T18:25:53.110 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140
2026-03-20T18:25:53.113 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T18:25:53.181 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140
2026-03-20T18:25:53.234 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140
2026-03-20T18:25:53.237 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T18:25:53.265 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140
2026-03-20T18:25:53.265 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:53.265 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T18:25:53.265 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T18:25:53.265 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-20T18:25:53.265 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:53.280 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T18:25:53.293 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-20T18:25:53.342 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140
2026-03-20T18:25:54.561 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140
2026-03-20T18:25:54.565 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T18:25:54.590 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-20T18:25:54.590 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:54.590 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-20T18:25:54.590 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T18:25:54.590 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-20T18:25:54.590 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:54.603 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T18:25:54.627 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-20T18:25:54.627 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:54.627 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T18:25:54.627 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:54.782 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T18:25:54.807 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-20T18:25:54.807 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T18:25:54.807 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T18:25:54.807 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T18:25:54.807 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-20T18:25:54.807 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:25:55.622 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140
2026-03-20T18:25:55.631 INFO:teuthology.orchestra.run.vm00.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140
2026-03-20T18:25:55.638 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140
2026-03-20T18:25:55.651 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140
2026-03-20T18:25:55.671 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140
2026-03-20T18:25:55.680 INFO:teuthology.orchestra.run.vm00.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140
2026-03-20T18:25:55.684 INFO:teuthology.orchestra.run.vm00.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140
2026-03-20T18:25:55.684 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-20T18:25:55.701 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-20T18:25:55.701 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-20T18:25:57.251 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140
2026-03-20T18:25:57.252 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 44/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 51/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140
2026-03-20T18:25:57.253 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140
2026-03-20T18:25:57.254 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140
2026-03-20T18:25:57.254 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140
2026-03-20T18:25:57.254 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140
2026-03-20T18:25:57.254 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140
2026-03-20T18:25:57.254 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140
2026-03-20T18:25:57.255 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140
2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140 2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140 2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140 2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 107/140 2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140 2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140 2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140 2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140 2026-03-20T18:25:57.256 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 121/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140 2026-03-20T18:25:57.257 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140 2026-03-20T18:25:57.258 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140 2026-03-20T18:25:57.258 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout:Upgraded: 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout:Installed: 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: bzip2-1.0.8-11.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.364 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 
2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: fuse-2.9.9-17.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: 
libxslt-1.1.34-12.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: perl-Benchmark-1.23-483.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: perl-Test-Harness-1:3.42-461.el9.noarch 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-20T18:25:57.365 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply-3.11-14.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-20T18:25:57.366 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-20T18:25:57.367 
INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: re2-1:20211101-20.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: s3cmd-2.4.0-1.el9.noarch 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:25:57.367 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 
2026-03-20T18:25:57.468 DEBUG:teuthology.parallel:result is None 2026-03-20T18:25:59.418 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140 2026-03-20T18:25:59.427 INFO:teuthology.orchestra.run.vm05.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140 2026-03-20T18:25:59.436 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140 2026-03-20T18:25:59.450 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140 2026-03-20T18:25:59.473 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140 2026-03-20T18:25:59.483 INFO:teuthology.orchestra.run.vm05.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140 2026-03-20T18:25:59.487 INFO:teuthology.orchestra.run.vm05.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140 2026-03-20T18:25:59.487 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140 2026-03-20T18:25:59.508 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140 2026-03-20T18:25:59.508 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140 2026-03-20T18:26:01.170 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140 2026-03-20T18:26:01.170 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140 2026-03-20T18:26:01.171 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
libquadmath-11.5.0-14.el9.x86_64 44/140 2026-03-20T18:26:01.174 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 51/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: 
Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140 2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140 
2026-03-20T18:26:01.175 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 107/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 121/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140 2026-03-20T18:26:01.176 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140 2026-03-20T18:26:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout:Upgraded: 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout:Installed: 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-20T18:26:01.283 INFO:teuthology.orchestra.run.vm05.stdout: bzip2-1.0.8-11.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 
2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: fuse-2.9.9-17.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: 
libxslt-1.1.34-12.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: lua-5.4.4-4.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-20T18:26:01.284 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: perl-Benchmark-1.23-483.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: perl-Test-Harness-1:3.42-461.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing-2.4.7-9.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-20T18:26:01.285 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-20T18:26:01.286 
INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: s3cmd-2.4.0-1.el9.noarch 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: unzip-6.0-59.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: zip-3.0-35.el9.x86_64 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:01.286 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-20T18:26:01.401 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:01.401 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388 2026-03-20T18:26:02.028 DEBUG:teuthology.orchestra.run.vm00:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-20T18:26:02.049 INFO:teuthology.orchestra.run.vm00.stdout:20.2.0-712.g70f8415b.el9 2026-03-20T18:26:02.050 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9 2026-03-20T18:26:02.050 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed. 2026-03-20T18:26:02.051 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388 2026-03-20T18:26:02.630 DEBUG:teuthology.orchestra.run.vm02:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-20T18:26:02.650 INFO:teuthology.orchestra.run.vm02.stdout:20.2.0-712.g70f8415b.el9 2026-03-20T18:26:02.650 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9 2026-03-20T18:26:02.650 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed. 
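[editor's note] The version check above runs once per target (vm00 and vm02 so far; vm05 follows): teuthology resolves the expected build for the sha1 through the shaman API query shown in the log, then asks rpm on each node what is actually installed and compares the two. A minimal Python sketch of that comparison, assuming a hypothetical run_on() helper that executes a command over SSH and returns stdout (teuthology's orchestra and packaging modules do the real work):

    import subprocess

    # URL pattern the log shows teuthology querying to resolve the expected build.
    SHAMAN_QUERY = (
        "https://shaman.ceph.com/api/search?status=ready&project=ceph"
        "&flavor=default&distros=centos%2F9%2Fx86_64&sha1={sha1}"
    )

    def run_on(host, cmd):
        # Hypothetical helper: run `cmd` on `host` over ssh, return stripped stdout.
        res = subprocess.run(["ssh", host, cmd], check=True,
                             capture_output=True, text=True)
        return res.stdout.strip()

    def check_ceph_version(host, expected):
        # Same query format the log shows: VERSION-RELEASE,
        # e.g. 20.2.0-712.g70f8415b.el9.
        installed = run_on(host, "rpm -q ceph --qf '%{VERSION}-%{RELEASE}'")
        # Judging by the log, the expected string omits the trailing .el9
        # dist tag, so a prefix match is used here.
        if not installed.startswith(expected):
            raise RuntimeError(f"{host}: expected {expected}, got {installed}")

    check_ceph_version("vm00.local", "20.2.0-712.g70f8415b")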
2026-03-20T18:26:02.652 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T18:26:03.276 DEBUG:teuthology.orchestra.run.vm05:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-20T18:26:03.299 INFO:teuthology.orchestra.run.vm05.stdout:20.2.0-712.g70f8415b.el9
2026-03-20T18:26:03.299 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9
2026-03-20T18:26:03.299 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed.
2026-03-20T18:26:03.300 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-20T18:26:03.300 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T18:26:03.300 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-20T18:26:03.331 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-20T18:26:03.331 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-20T18:26:03.359 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-20T18:26:03.359 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-20T18:26:03.388 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-20T18:26:03.389 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T18:26:03.389 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper
2026-03-20T18:26:03.416 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-20T18:26:03.483 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-20T18:26:03.483 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/daemon-helper
2026-03-20T18:26:03.507 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-20T18:26:03.573 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-20T18:26:03.573 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/daemon-helper
2026-03-20T18:26:03.600 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-20T18:26:03.667 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-20T18:26:03.668 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T18:26:03.668 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-20T18:26:03.695 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-20T18:26:03.763 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-20T18:26:03.763 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-20T18:26:03.786 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-20T18:26:03.851 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-20T18:26:03.851 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-20T18:26:03.877 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-20T18:26:03.944 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
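[editor's note] Each helper above ships the same way: the file body is streamed into `sudo dd of=<path>` on the remote and, for executables, followed by `chmod a=rx`. A minimal sketch of that pattern over plain ssh (illustrative only; ship_file is a hypothetical name, and teuthology actually streams through its orchestra run layer):

    import subprocess

    def ship_file(host, local_path, remote_path, executable=False):
        # Stream the local file into `sudo dd of=remote_path` on the remote,
        # mirroring the "sudo dd of=..." lines in the log.
        with open(local_path, "rb") as src:
            subprocess.run(["ssh", host, f"sudo dd of={remote_path}"],
                           stdin=src, check=True)
        if executable:
            # The log marks helpers with `chmod a=rx` after writing them.
            subprocess.run(["ssh", host, f"sudo chmod a=rx -- {remote_path}"],
                           check=True)

    # e.g. ship_file("vm00.local", "daemon-helper", "/usr/bin/daemon-helper",
    #                executable=True)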
2026-03-20T18:26:03.944 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:03.944 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer 2026-03-20T18:26:03.972 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-20T18:26:04.038 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:04.039 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/stdin-killer 2026-03-20T18:26:04.064 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-20T18:26:04.130 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-20T18:26:04.130 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/stdin-killer 2026-03-20T18:26:04.159 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-20T18:26:04.229 INFO:teuthology.run_tasks:Running task ceph... 2026-03-20T18:26:04.273 INFO:tasks.ceph:Making ceph log dir writeable by non-root... 2026-03-20T18:26:04.273 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /var/log/ceph 2026-03-20T18:26:04.275 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 777 /var/log/ceph 2026-03-20T18:26:04.277 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /var/log/ceph 2026-03-20T18:26:04.304 INFO:tasks.ceph:Disabling ceph logrotate... 2026-03-20T18:26:04.304 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/logrotate.d/ceph 2026-03-20T18:26:04.338 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/logrotate.d/ceph 2026-03-20T18:26:04.342 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/logrotate.d/ceph 2026-03-20T18:26:04.373 INFO:tasks.ceph:Creating extra log directories... 2026-03-20T18:26:04.373 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger 2026-03-20T18:26:04.405 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger 2026-03-20T18:26:04.407 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger 2026-03-20T18:26:04.443 INFO:tasks.ceph:Creating ceph cluster ceph... 
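With the helpers in place, the ceph task starts by normalizing logging on every remote: /var/log/ceph is made world-writable so non-root daemons can log, the distro's logrotate drop-in is removed so teuthology controls log retention itself, and extra valgrind/profiling-logger directories are created before the cluster config is dumped below. The same prep as a loop, a sketch with plain ssh (command strings taken verbatim from the log):

    import subprocess

    PREP = [
        "sudo chmod 777 /var/log/ceph",         # writable by non-root daemons
        "sudo rm -f -- /etc/logrotate.d/ceph",  # teuthology keeps every log line
        "sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger",
    ]

    for host in ("vm00.local", "vm02.local", "vm05.local"):
        for cmd in PREP:
            subprocess.run(["ssh", host, cmd], check=True)
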
2026-03-20T18:26:04.443 INFO:tasks.ceph:config {'conf': {'client': {'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'global': {'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}}, 'fs': 'xfs', 'mkfs_options': None, 'mount_options': None, 'skip_mgr_daemons': False, 'log_ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', '\\(POOL_APP_NOT_ENABLED\\)', 'not have an application enabled'], 'cpu_profile': set(), 'cluster': 'ceph', 'mon_bind_msgr2': True, 'mon_bind_addrvec': True} 2026-03-20T18:26:04.443 INFO:tasks.ceph:ctx.config {'archive_path': '/archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719', 'branch': 'tentacle', 'description': 'rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides supported-distros/{centos_latest} tasks/{0-install test_dedup}}', 'email': None, 'first_in_suite': False, 'flavor': 'default', 'job_id': '2719', 'last_in_suite': False, 'machine_type': 'vps', 'name': 'kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps', 'no_nested_subset': False, 'openstack': [{'volumes': {'count': 4, 'size': 10}}], 'os_type': 'centos', 'os_version': '9.stream', 'overrides': {'admin_socket': {'branch': 'tentacle'}, 'ansible.cephlab': {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}, 'ceph': {'conf': {'client': {'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'global': {'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'fs': 'xfs', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', '\\(POOL_APP_NOT_ENABLED\\)', 'not have an application enabled'], 'sha1': 
'70f8415b300f041766fa27faf7d5472699e32388'}, 'ceph-deploy': {'bluestore': True, 'conf': {'client': {'log file': '/var/log/ceph/ceph-$name.$pid.log'}, 'mon': {}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd objectstore': 'bluestore'}}, 'fs': 'xfs'}, 'cephadm': {'cephadm_binary_url': 'https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm'}, 'install': {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}, 'rgw': {'frontend': 'beast', 'storage classes': {'FROZEN': None, 'LUKEWARM': None}}, 'thrashosds': {'bdev_inject_crash': 2, 'bdev_inject_crash_probability': 0.5}, 'workunit': {'branch': 'tt-tentacle', 'sha1': '938e12e80b676435f28993327ab6082a0d57e922'}}, 'owner': 'kyr', 'priority': 1000, 'repo': 'https://github.com/ceph/ceph.git', 'roles': [['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0'], ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1'], ['client.2']], 'seed': 9676, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'sleep_before_teardown': 0, 'suite': 'rgw', 'suite_branch': 'tt-tentacle', 'suite_path': '/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa', 'suite_relpath': 'qa', 'suite_repo': 'https://github.com/kshtsk/ceph.git', 'suite_sha1': '938e12e80b676435f28993327ab6082a0d57e922', 'targets': {'vm00.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHgRJrHOZyqTVAoIakGGfMNHQqM2D7IKMDlZ3KBkehSsuc30OZ+snHqbcDv3ViWEzoMxVJzcTlzwMF9LAAKreyU=', 'vm02.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLHcJwQcYSeuAFCeT1rgGP6uxiInXVH0Tl0QotS7NIUfDkpdn09b9jmpmv1ADNotz13xr2oAJiPMtE4sPnXZeLo=', 'vm05.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEWi21wlYfNkmZrMXDcXr9wyDZJ87iDLDe4kCHMZgVRj2Mx32g/A5kbCBNwUCFHtPO/dvch4xUKrN4mpzVZIKk0='}, 'tasks': [{'internal.check_packages': None}, {'internal.buildpackages_prep': None}, {'internal.save_config': None}, {'internal.check_lock': None}, {'internal.add_remotes': None}, {'console_log': None}, {'internal.connect': None}, {'internal.push_inventory': None}, {'internal.serialize_remote_roles': None}, {'internal.check_conflict': None}, {'internal.check_ceph_data': None}, {'internal.vm_setup': None}, {'internal.base': None}, {'internal.archive_upload': None}, {'internal.archive': None}, {'internal.coredump': None}, {'internal.sudo': None}, {'internal.syslog': None}, {'internal.timer': None}, {'pcp': None}, {'selinux': None}, {'ansible.cephlab': None}, {'clock': None}, {'install': None}, {'ceph': None}, {'openssl_keys': None}, {'rgw': ['client.0', 'client.1', 'client.2']}, {'tox': ['client.0']}, {'tox': ['client.0']}, {'dedup-tests': {'client.0': {'rgw_server': 'client.0'}}}], 'teuthology': {'fragments_dropped': [], 'meta': {}, 'postmerge': []}, 'teuthology_branch': 'clyso-debian-13', 'teuthology_repo': 'https://github.com/clyso/teuthology', 'teuthology_sha1': '1c580df7a9c7c2aadc272da296344fd99f27c444', 'timestamp': '2026-03-20_18:10:20', 'tube': 'vps', 'user': 'kyr', 'verbose': False, 
'worker_log': '/home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345'} 2026-03-20T18:26:04.444 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data 2026-03-20T18:26:04.475 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data 2026-03-20T18:26:04.477 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data 2026-03-20T18:26:04.498 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m0777 -- /var/run/ceph 2026-03-20T18:26:04.534 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m0777 -- /var/run/ceph 2026-03-20T18:26:04.536 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m0777 -- /var/run/ceph 2026-03-20T18:26:04.567 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:04.567 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout 2026-03-20T18:26:04.618 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4'] 2026-03-20T18:26:04.618 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_1 2026-03-20T18:26:04.674 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0 2026-03-20T18:26:04.675 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T18:26:04.675 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 642 Links: 1 2026-03-20T18:26:04.675 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T18:26:04.675 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T18:26:04.675 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 18:25:56.011652966 +0000 2026-03-20T18:26:04.675 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 18:24:14.925120287 +0000 2026-03-20T18:26:04.675 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 18:24:14.925120287 +0000 2026-03-20T18:26:04.675 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 18:24:14.925120287 +0000 2026-03-20T18:26:04.675 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1 2026-03-20T18:26:04.742 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-20T18:26:04.742 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-20T18:26:04.742 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000136966 s, 3.7 MB/s 2026-03-20T18:26:04.743 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1 2026-03-20T18:26:04.804 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_2 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 693 Links: 1 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 18:25:56.011652966 +0000 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 18:24:15.183120591 +0000 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 18:24:15.183120591 +0000 2026-03-20T18:26:04.865 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 18:24:15.183120591 +0000 2026-03-20T18:26:04.865 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1 2026-03-20T18:26:04.930 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-20T18:26:04.931 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-20T18:26:04.931 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000203652 s, 2.5 MB/s 2026-03-20T18:26:04.931 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2 2026-03-20T18:26:04.988 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_3 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 709 Links: 1 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 18:25:56.012652967 +0000 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 18:24:15.460120918 +0000 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 18:24:15.460120918 +0000 2026-03-20T18:26:05.046 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 18:24:15.460120918 +0000 2026-03-20T18:26:05.046 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1 2026-03-20T18:26:05.110 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-20T18:26:05.110 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-20T18:26:05.110 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000170078 s, 3.0 MB/s 2026-03-20T18:26:05.111 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-03-20T18:26:05.167 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_4 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 746 Links: 1 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 18:25:56.012652967 +0000 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 18:24:15.727121233 +0000 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 18:24:15.727121233 +0000 2026-03-20T18:26:05.226 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 18:24:15.727121233 +0000 2026-03-20T18:26:05.226 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-03-20T18:26:05.292 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-20T18:26:05.292 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-20T18:26:05.292 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000141434 s, 3.6 MB/s 2026-03-20T18:26:05.293 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4 2026-03-20T18:26:05.350 INFO:tasks.ceph:osd dev map: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:05.350 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:05.350 DEBUG:teuthology.orchestra.run.vm02:> dd if=/scratch_devs of=/dev/stdout 2026-03-20T18:26:05.366 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4'] 2026-03-20T18:26:05.367 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vg_nvme/lv_1 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 663 Links: 1 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-20 18:25:50.698527997 +0000 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-20 18:23:55.382396160 +0000 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-20 18:23:55.382396160 +0000 2026-03-20T18:26:05.423 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-20 18:23:55.382396160 +0000 2026-03-20T18:26:05.423 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1 2026-03-20T18:26:05.488 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-20T18:26:05.488 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-20T18:26:05.488 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000161271 s, 3.2 MB/s 
2026-03-20T18:26:05.489 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1 2026-03-20T18:26:05.547 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vg_nvme/lv_2 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 701 Links: 1 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-20 18:25:50.698527997 +0000 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-20 18:23:55.621396404 +0000 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-20 18:23:55.621396404 +0000 2026-03-20T18:26:05.606 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-20 18:23:55.621396404 +0000 2026-03-20T18:26:05.606 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1 2026-03-20T18:26:05.669 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-20T18:26:05.669 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-20T18:26:05.669 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000183353 s, 2.8 MB/s 2026-03-20T18:26:05.670 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2 2026-03-20T18:26:05.725 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vg_nvme/lv_3 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 741 Links: 1 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-20 18:25:50.698527997 +0000 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-20 18:23:55.883396672 +0000 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-20 18:23:55.883396672 +0000 2026-03-20T18:26:05.782 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-20 18:23:55.883396672 +0000 2026-03-20T18:26:05.782 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1 2026-03-20T18:26:05.848 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-20T18:26:05.848 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-20T18:26:05.848 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.00017183 s, 3.0 MB/s 2026-03-20T18:26:05.850 DEBUG:teuthology.orchestra.run.vm02:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-03-20T18:26:05.907 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vg_nvme/lv_4 2026-03-20T18:26:05.965 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-03-20T18:26:05.965 INFO:teuthology.orchestra.run.vm02.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T18:26:05.965 INFO:teuthology.orchestra.run.vm02.stdout:Device: 6h/6d Inode: 764 Links: 1 2026-03-20T18:26:05.965 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T18:26:05.965 INFO:teuthology.orchestra.run.vm02.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T18:26:05.965 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-20 18:25:50.699527999 +0000 2026-03-20T18:26:05.966 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-20 18:23:56.111396904 +0000 2026-03-20T18:26:05.966 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-20 18:23:56.111396904 +0000 2026-03-20T18:26:05.966 INFO:teuthology.orchestra.run.vm02.stdout: Birth: 2026-03-20 18:23:56.111396904 +0000 2026-03-20T18:26:05.966 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-03-20T18:26:06.027 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-20T18:26:06.027 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-20T18:26:06.027 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000162223 s, 3.2 MB/s 2026-03-20T18:26:06.028 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4 2026-03-20T18:26:06.085 INFO:tasks.ceph:osd dev map: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:06.085 INFO:tasks.ceph:remote_to_roles_to_devs: {Remote(name='ubuntu@vm00.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}, Remote(name='ubuntu@vm02.local'): {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}} 2026-03-20T18:26:06.085 INFO:tasks.ceph:Generating config... 
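Before the "osd dev map" above was produced, every scratch LV went through the same three probes: the symlink stats cleanly, one sector reads back via dd, and nothing besides devtmpfs has it mounted; the surviving devices are then handed to the host's OSD roles in order. A sketch of both steps (hypothetical helper, plain ssh; devices and roles from the log):

    import subprocess

    def scratch_dev_usable(host: str, dev: str) -> bool:
        # The three probes from the log: exists, readable, not mounted.
        subprocess.run(["ssh", host, "stat", dev], check=True)
        subprocess.run(["ssh", host, f"sudo dd if={dev} of=/dev/null count=1"],
                       check=True)
        mounted = subprocess.run(
            ["ssh", host, f"mount | grep -v devtmpfs | grep -q {dev}"]
        ).returncode == 0
        return not mounted

    devs = ["/dev/vg_nvme/lv_1", "/dev/vg_nvme/lv_2",
            "/dev/vg_nvme/lv_3", "/dev/vg_nvme/lv_4"]
    # In-order assignment reproduces vm02's map: osd.4 -> lv_1, ..., osd.7 -> lv_4.
    dev_map = dict(zip(["osd.4", "osd.5", "osd.6", "osd.7"], devs))
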
2026-03-20T18:26:06.086 INFO:tasks.ceph:[client] debug rgw = 20
2026-03-20T18:26:06.086 INFO:tasks.ceph:[client] debug rgw dedup = 20
2026-03-20T18:26:06.086 INFO:tasks.ceph:[client] setgroup = ceph
2026-03-20T18:26:06.086 INFO:tasks.ceph:[client] setuser = ceph
2026-03-20T18:26:06.086 INFO:tasks.ceph:[global] osd_max_pg_log_entries = 10
2026-03-20T18:26:06.086 INFO:tasks.ceph:[global] osd_min_pg_log_entries = 10
2026-03-20T18:26:06.086 INFO:tasks.ceph:[mgr] debug mgr = 20
2026-03-20T18:26:06.086 INFO:tasks.ceph:[mgr] debug ms = 1
2026-03-20T18:26:06.086 INFO:tasks.ceph:[mon] debug mon = 20
2026-03-20T18:26:06.086 INFO:tasks.ceph:[mon] debug ms = 1
2026-03-20T18:26:06.086 INFO:tasks.ceph:[mon] debug paxos = 20
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] bdev async discard = True
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] bdev enable discard = True
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] bluestore allocator = bitmap
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] bluestore block size = 96636764160
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] bluestore fsck on mount = True
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] debug bluefs = 1/20
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] debug bluestore = 1/20
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] debug ms = 1
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] debug osd = 20
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] debug rocksdb = 4/10
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] mon osd backfillfull_ratio = 0.85
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] mon osd full ratio = 0.9
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] mon osd nearfull ratio = 0.8
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] osd failsafe full ratio = 0.95
2026-03-20T18:26:06.086 INFO:tasks.ceph:[osd] osd mclock iops capacity threshold hdd = 49000
2026-03-20T18:26:06.087 INFO:tasks.ceph:[osd] osd objectstore = bluestore
2026-03-20T18:26:06.087 INFO:tasks.ceph:[osd] osd shutdown pgref assert = True
2026-03-20T18:26:06.087 INFO:tasks.ceph:Setting up mon.a...
2026-03-20T18:26:06.087 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring /etc/ceph/ceph.keyring
2026-03-20T18:26:06.128 INFO:teuthology.orchestra.run.vm00.stdout:creating /etc/ceph/ceph.keyring
2026-03-20T18:26:06.132 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=mon.
/etc/ceph/ceph.keyring 2026-03-20T18:26:06.212 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T18:26:06.277 DEBUG:tasks.ceph:Ceph mon addresses: [('mon.a', '192.168.123.100'), ('mon.c', '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]'), ('mon.b', '192.168.123.102')] 2026-03-20T18:26:06.278 DEBUG:tasks.ceph:writing out conf {'global': {'chdir': '', 'pid file': '/var/run/ceph/$cluster-$name.pid', 'auth supported': 'cephx', 'filestore xattr use omap': 'true', 'mon clock drift allowed': '1.000', 'osd crush chooseleaf type': '0', 'auth debug': 'true', 'ms die on old message': 'true', 'ms die on bug': 'true', 'mon max pg per osd': '10000', 'mon pg warn max object skew': '0', 'osd_pool_default_pg_autoscale_mode': 'off', 'osd pool default size': '2', 'mon osd allow primary affinity': 'true', 'mon osd allow pg remap': 'true', 'mon warn on legacy crush tunables': 'false', 'mon warn on crush straw calc version zero': 'false', 'mon warn on no sortbitwise': 'false', 'mon warn on osd down out interval zero': 'false', 'mon warn on too few osds': 'false', 'mon_warn_on_pool_pg_num_not_power_of_two': 'false', 'mon_warn_on_pool_no_redundancy': 'false', 'mon_allow_pool_size_one': 'true', 'osd pool default erasure code profile': 'plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd', 'osd default data pool replay window': '5', 'mon allow pool delete': 'true', 'mon cluster log file level': 'debug', 'debug asserts on shutdown': 'true', 'mon health detail to clog': 'false', 'mon host': '192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.102', 'osd_max_pg_log_entries': 10, 'osd_min_pg_log_entries': 10}, 'osd': {'osd journal size': '100', 'osd scrub load threshold': '5.0', 'osd scrub max interval': '600', 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': 'true', 'osd recover clone overlap': 'true', 'osd recovery max chunk': '1048576', 'osd debug shutdown': 'true', 'osd debug op order': 'true', 'osd debug verify stray on activate': 'true', 'osd debug trim objects': 'true', 'osd open classes on start': 'true', 'osd debug pg log writeout': 'true', 'osd deep scrub update digest min age': '30', 'osd map max advance': '10', 'journal zero on create': 'true', 'filestore ondisk finisher threads': '3', 'filestore apply finisher threads': '3', 'bdev debug aio': 'true', 'osd debug misdirected ops': 'true', 'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}, 'mgr': {'debug ms': 1, 'debug mgr': 20, 'debug mon': '20', 'debug auth': '20', 'mon reweight min pgs per osd': '4', 'mon reweight min bytes per osd': '10', 'mgr/telemetry/nag': 'false'}, 'mon': {'debug ms': 1, 'debug mon': 20, 'debug paxos': 20, 'debug auth': '20', 'mon data avail warn': '5', 'mon mgr mkfs grace': '240', 'mon reweight min pgs per osd': '4', 'mon osd reporter subtree level': 'osd', 'mon osd prime pg temp': 'true', 'mon reweight min bytes per osd': '10', 'auth mon ticket ttl': '660', 'auth service ticket ttl': '240', 'mon_warn_on_insecure_global_id_reclaim': 'false', 
'mon_warn_on_insecure_global_id_reclaim_allowed': 'false', 'mon_down_mkfs_grace': '2m', 'mon_warn_on_filestore_osds': 'false'}, 'client': {'rgw cache enabled': 'true', 'rgw enable ops log': 'true', 'rgw enable usage log': 'true', 'log file': '/var/log/ceph/$cluster-$name.$pid.log', 'admin socket': '/var/run/ceph/$cluster-$name.$pid.asok', 'debug rgw': 20, 'debug rgw dedup': 20, 'setgroup': 'ceph', 'setuser': 'ceph'}, 'mon.a': {}, 'mon.c': {}, 'mon.b': {}} 2026-03-20T18:26:06.278 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:06.278 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph.tmp.conf 2026-03-20T18:26:06.333 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage monmaptool -c /home/ubuntu/cephtest/ceph.tmp.conf --create --clobber --enable-all-features --add a 192.168.123.100 --addv c '[v2:192.168.123.100:3301,v1:192.168.123.100:6790]' --add b 192.168.123.102 --print /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: monmap file /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: generated fsid e1f9fff1-39d0-4146-abc7-3b481a096f4f 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = tentacle 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:epoch 0 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:fsid e1f9fff1-39d0-4146-abc7-3b481a096f4f 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:last_changed 2026-03-20T18:26:06.407880+0000 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:created 2026-03-20T18:26:06.407880+0000 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:min_mon_release 20 (tentacle) 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:election_strategy: 1 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:1: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.b 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:2: [v2:192.168.123.100:3301/0,v1:192.168.123.100:6790/0] mon.c 2026-03-20T18:26:06.408 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: writing epoch 0 to /home/ubuntu/cephtest/ceph.monmap (3 monitors) 2026-03-20T18:26:06.410 DEBUG:teuthology.orchestra.run.vm00:> rm -- /home/ubuntu/cephtest/ceph.tmp.conf 2026-03-20T18:26:06.465 INFO:tasks.ceph:Writing /etc/ceph/ceph.conf for FSID e1f9fff1-39d0-4146-abc7-3b481a096f4f... 
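The monmaptool call above was assembled from the mon address list logged a few lines earlier: bare v1 IPs go in with --add, explicit v2/v1 address vectors with --addv. A sketch of that argument construction (names and addresses from the log; executing the list via subprocess.run is left out):

    # Mon addresses as logged: ('mon.a', ...), ('mon.c', ...), ('mon.b', ...).
    mons = [
        ("a", "192.168.123.100"),
        ("c", "[v2:192.168.123.100:3301,v1:192.168.123.100:6790]"),
        ("b", "192.168.123.102"),
    ]

    args = ["monmaptool", "-c", "/home/ubuntu/cephtest/ceph.tmp.conf",
            "--create", "--clobber", "--enable-all-features"]
    for name, addr in mons:
        # Bracketed v2/v1 address vectors need --addv; bare IPs use --add.
        args += ["--addv" if addr.startswith("[") else "--add", name, addr]
    args += ["--print", "/home/ubuntu/cephtest/ceph.monmap"]

The generated conf, now carrying the fsid from the monmap step, is then streamed to /etc/ceph/ceph.conf on every host through sudo tee, as the next commands show.
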
2026-03-20T18:26:06.466 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-20T18:26:06.508 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-20T18:26:06.510 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout:[global] 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: chdir = "" 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: pid file = /var/run/ceph/$cluster-$name.pid 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: auth supported = cephx 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: filestore xattr use omap = true 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: mon clock drift allowed = 1.000 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: osd crush chooseleaf type = 0 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: auth debug = true 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: ms die on old message = true 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: ms die on bug = true 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: mon max pg per osd = 10000 # >= luminous 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: mon pg warn max object skew = 0 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.551 INFO:teuthology.orchestra.run.vm00.stdout: # disable pg_autoscaler by default for new pools 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd_pool_default_pg_autoscale_mode = off 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd pool default size = 2 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon osd allow primary affinity = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon osd allow pg remap = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on legacy crush tunables = false 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on crush straw calc version zero = false 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on no sortbitwise = false 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on osd down out interval zero = false 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on too few osds = false 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_pool_no_redundancy = false 
2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon_allow_pool_size_one = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd default data pool replay window = 5 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon allow pool delete = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon cluster log file level = debug 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: debug asserts on shutdown = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon health detail to clog = false 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: mon host = "192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.102" 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd_max_pg_log_entries = 10 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd_min_pg_log_entries = 10 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: fsid = e1f9fff1-39d0-4146-abc7-3b481a096f4f 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout:[osd] 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd journal size = 100 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd scrub load threshold = 5.0 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd scrub max interval = 600 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock profile = high_recovery_ops 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock skip benchmark = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd recover clone overlap = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd recovery max chunk = 1048576 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd debug shutdown = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd debug op order = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd debug verify stray on activate = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd debug trim objects = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd open classes on start = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd debug pg log writeout = true 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd deep scrub update digest min age = 30 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: osd map max advance = 10 
2026-03-20T18:26:06.552 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: journal zero on create = true 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: filestore ondisk finisher threads = 3 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: filestore apply finisher threads = 3 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: bdev debug aio = true 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: osd debug misdirected ops = true 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: bdev async discard = True 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: bdev enable discard = True 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: bluestore allocator = bitmap 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: bluestore block size = 96636764160 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: bluestore fsck on mount = True 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug bluefs = 1/20 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug bluestore = 1/20 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug osd = 20 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug rocksdb = 4/10 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon osd backfillfull_ratio = 0.85 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon osd full ratio = 0.9 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon osd nearfull ratio = 0.8 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: osd failsafe full ratio = 0.95 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock iops capacity threshold hdd = 49000 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: osd objectstore = bluestore 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: osd shutdown pgref assert = True 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout:[mgr] 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug mgr = 20 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug mon = 20 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug auth = 20 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min pgs per osd = 4 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min bytes per osd = 10 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mgr/telemetry/nag = false 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout:[mon] 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug mon = 20 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug paxos = 20 2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: debug auth = 20 
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon data avail warn = 5
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon mgr mkfs grace = 240
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min pgs per osd = 4
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon osd reporter subtree level = osd
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon osd prime pg temp = true
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min bytes per osd = 10
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: auth mon ticket ttl = 660 # 11m
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: auth service ticket ttl = 240 # 4m
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: # don't complain about insecure global_id in the test suite
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-03-20T18:26:06.553 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: # 1m isn't quite enough
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: mon_down_mkfs_grace = 2m
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_filestore_osds = false
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout:[client]
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: rgw cache enabled = true
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable ops log = true
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable usage log = true
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: debug rgw = 20
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: debug rgw dedup = 20
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: setgroup = ceph
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout: setuser = ceph
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout:[mon.a]
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout:[mon.c]
2026-03-20T18:26:06.554 INFO:teuthology.orchestra.run.vm00.stdout:[mon.b]
[2026-03-20T18:26:06.554-18:26:06.557: ceph.conf contents identical to the vm00 dump above were echoed on vm02; duplicate output omitted]
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout:[global]
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: chdir = ""
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: auth supported = cephx
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: filestore xattr use omap = true
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon clock drift allowed = 1.000
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: osd crush chooseleaf type = 0
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: auth debug = true
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: ms die on old message = true
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: ms die on bug = true
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon max pg per osd = 10000 # >= luminous
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon pg warn max object skew = 0
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: # disable pg_autoscaler by default for new
pools 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: osd_pool_default_pg_autoscale_mode = off 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: osd pool default size = 2 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon osd allow primary affinity = true 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon osd allow pg remap = true 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on legacy crush tunables = false 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on crush straw calc version zero = false 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on no sortbitwise = false 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on osd down out interval zero = false 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on too few osds = false 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false 2026-03-20T18:26:06.557 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_pool_no_redundancy = false 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: mon_allow_pool_size_one = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd default data pool replay window = 5 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: mon allow pool delete = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: mon cluster log file level = debug 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: debug asserts on shutdown = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: mon health detail to clog = false 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: mon host = "192.168.123.100,[v2:192.168.123.100:3301,v1:192.168.123.100:6790],192.168.123.102" 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd_max_pg_log_entries = 10 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd_min_pg_log_entries = 10 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: fsid = e1f9fff1-39d0-4146-abc7-3b481a096f4f 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout:[osd] 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd journal size = 100 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd scrub load threshold = 5.0 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd scrub max interval = 600 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock profile = high_recovery_ops 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock skip benchmark = true 2026-03-20T18:26:06.558 
INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd recover clone overlap = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd recovery max chunk = 1048576 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd debug shutdown = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd debug op order = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd debug verify stray on activate = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd debug trim objects = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd open classes on start = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd debug pg log writeout = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd deep scrub update digest min age = 30 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: osd map max advance = 10 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: journal zero on create = true 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: filestore ondisk finisher threads = 3 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: filestore apply finisher threads = 3 2026-03-20T18:26:06.558 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: bdev debug aio = true 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: osd debug misdirected ops = true 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: bdev async discard = True 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: bdev enable discard = True 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: bluestore allocator = bitmap 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: bluestore block size = 96636764160 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: bluestore fsck on mount = True 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug bluefs = 1/20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug bluestore = 1/20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug osd = 20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug rocksdb = 4/10 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon osd backfillfull_ratio = 0.85 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon osd full ratio = 0.9 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon osd nearfull ratio = 0.8 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: osd failsafe full ratio = 0.95 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock iops capacity threshold hdd = 49000 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: osd objectstore = bluestore 2026-03-20T18:26:06.559 
INFO:teuthology.orchestra.run.vm05.stdout: osd shutdown pgref assert = True 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout:[mgr] 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug mgr = 20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug mon = 20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug auth = 20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min pgs per osd = 4 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min bytes per osd = 10 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mgr/telemetry/nag = false 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout:[mon] 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug mon = 20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug paxos = 20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: debug auth = 20 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon data avail warn = 5 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon mgr mkfs grace = 240 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min pgs per osd = 4 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon osd reporter subtree level = osd 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon osd prime pg temp = true 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min bytes per osd = 10 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: # rotate auth tickets quickly to exercise renewal paths 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: auth mon ticket ttl = 660 # 11m 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: auth service ticket ttl = 240 # 4m 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: # don't complain about insecure global_id in the test suite 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_insecure_global_id_reclaim = false 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false 2026-03-20T18:26:06.559 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: # 1m isn't quite enough 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: mon_down_mkfs_grace = 2m 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_filestore_osds = false 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout:[client] 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: rgw cache enabled = true 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: rgw enable ops log = true 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: rgw enable usage log = true 
2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: debug rgw = 20 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: debug rgw dedup = 20 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: setgroup = ceph 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout: setuser = ceph 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout:[mon.a] 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout:[mon.c] 2026-03-20T18:26:06.560 INFO:teuthology.orchestra.run.vm05.stdout:[mon.b] 2026-03-20T18:26:06.568 INFO:tasks.ceph:Creating admin key on mon.a... 2026-03-20T18:26:06.568 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /etc/ceph/ceph.keyring 2026-03-20T18:26:06.648 INFO:tasks.ceph:Copying monmap to all nodes... 2026-03-20T18:26:06.648 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:06.648 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.keyring of=/dev/stdout 2026-03-20T18:26:06.662 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:06.663 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.monmap of=/dev/stdout 2026-03-20T18:26:06.718 INFO:tasks.ceph:Sending monmap to node ubuntu@vm00.local 2026-03-20T18:26:06.718 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:06.718 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-20T18:26:06.718 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T18:26:06.798 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:06.798 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:06.856 INFO:tasks.ceph:Sending monmap to node ubuntu@vm02.local 2026-03-20T18:26:06.856 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:06.856 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-20T18:26:06.856 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T18:26:06.893 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:06.893 DEBUG:teuthology.orchestra.run.vm02:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:06.947 INFO:tasks.ceph:Sending monmap to node ubuntu@vm05.local 2026-03-20T18:26:06.947 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-20T18:26:06.948 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-20T18:26:06.948 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T18:26:06.984 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-20T18:26:06.984 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:07.040 INFO:tasks.ceph:Setting up mon nodes... 2026-03-20T18:26:07.040 INFO:tasks.ceph:Setting up mgr nodes... 
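The monmap and admin-keyring fan-out above is deliberately low-tech: teuthology reads each file on the first mon host with dd and replays the bytes into dd on every target over its SSH channel, then loosens the keyring permissions so the test user can read it. A minimal stand-alone sketch of the same copy pattern, assuming direct SSH access to the hostnames in this run:

    # stream the staged monmap from vm00 to vm02, reading on one end of the
    # ssh pipe and writing on the other, as the dd pairs above do
    ssh ubuntu@vm00.local 'dd if=/home/ubuntu/cephtest/ceph.monmap' \
      | ssh ubuntu@vm02.local 'dd of=/home/ubuntu/cephtest/ceph.monmap'
    # the keyring additionally needs root on the target plus world-readable perms
    ssh ubuntu@vm00.local 'sudo dd if=/etc/ceph/ceph.keyring' \
      | ssh ubuntu@vm02.local 'sudo dd of=/etc/ceph/ceph.keyring && sudo chmod 0644 /etc/ceph/ceph.keyring'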
2026-03-20T18:26:07.041 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mgr/ceph-y && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.y /var/lib/ceph/mgr/ceph-y/keyring 2026-03-20T18:26:07.092 INFO:teuthology.orchestra.run.vm00.stdout:creating /var/lib/ceph/mgr/ceph-y/keyring 2026-03-20T18:26:07.094 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /var/lib/ceph/mgr/ceph-x && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.x /var/lib/ceph/mgr/ceph-x/keyring 2026-03-20T18:26:07.148 INFO:teuthology.orchestra.run.vm02.stdout:creating /var/lib/ceph/mgr/ceph-x/keyring 2026-03-20T18:26:07.150 INFO:tasks.ceph:Setting up mds nodes... 2026-03-20T18:26:07.150 INFO:tasks.ceph_client:Setting up client nodes... 2026-03-20T18:26:07.151 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.0 /etc/ceph/ceph.client.0.keyring && sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-20T18:26:07.189 INFO:teuthology.orchestra.run.vm00.stdout:creating /etc/ceph/ceph.client.0.keyring 2026-03-20T18:26:07.201 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.1 /etc/ceph/ceph.client.1.keyring && sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-20T18:26:07.243 INFO:teuthology.orchestra.run.vm02.stdout:creating /etc/ceph/ceph.client.1.keyring 2026-03-20T18:26:07.257 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.2 /etc/ceph/ceph.client.2.keyring && sudo chmod 0644 /etc/ceph/ceph.client.2.keyring 2026-03-20T18:26:07.302 INFO:teuthology.orchestra.run.vm05.stdout:creating /etc/ceph/ceph.client.2.keyring 2026-03-20T18:26:07.316 INFO:tasks.ceph:Running mkfs on osd nodes... 
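Each mgr and client identity above gets a fresh key via ceph-authtool --create-keyring --gen-key; at this stage the keys are bare secrets in per-entity files, with no capabilities attached (caps are stamped into the shared keyring later, just before the mon mkfs). The same step run by hand, using client.0 as the example:

    # generate client.0's keyring and make it readable by the test user
    sudo ceph-authtool --create-keyring --gen-key --name=client.0 /etc/ceph/ceph.client.0.keyring
    sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
    # print just the base64 secret to confirm the key was written
    ceph-authtool /etc/ceph/ceph.client.0.keyring --print-key --name=client.0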
2026-03-20T18:26:07.316 INFO:tasks.ceph:ctx.disk_config.remote_to_roles_to_dev: {Remote(name='ubuntu@vm00.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'}, Remote(name='ubuntu@vm02.local'): {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'}} 2026-03-20T18:26:07.316 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-0 2026-03-20T18:26:07.345 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:07.345 INFO:tasks.ceph:role: osd.0 2026-03-20T18:26:07.345 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm00.local 2026-03-20T18:26:07.345 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T18:26:07.418 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T18:26:07.424 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done. 
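The remote_to_roles_to_dev map above is where the job's storage overrides become concrete: each OSD role is pinned to one of the four logical volumes carved out of vg_nvme (25%VG apiece, per the job's logical_volumes config), so vm00 carries osd.0-3 and vm02 carries osd.4-7. To inspect the same layout directly on a node, one could run, for example:

    # list the scratch LVs backing the OSDs on this host
    sudo lvs -o lv_name,lv_size,vg_name vg_nvme
    # and see which of them are formatted or mounted yet
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT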
2026-03-20T18:26:07.426 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm00.local -o noatime 2026-03-20T18:26:07.426 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-0 2026-03-20T18:26:07.499 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-0 2026-03-20T18:26:07.570 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-1 2026-03-20T18:26:07.637 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:07.637 INFO:tasks.ceph:role: osd.1 2026-03-20T18:26:07.637 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm00.local 2026-03-20T18:26:07.637 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T18:26:07.704 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T18:26:07.708 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done. 
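The mkfs invocation is identical for every device: -f overwrites any existing filesystem signature, the yes | prefix auto-answers any residual confirmation prompt, and -i size=2048 requests 2 KiB inodes, a long-standing Ceph QA habit meant to keep object xattrs inline (largely a FileStore-era concern; a BlueStore data dir holds only a handful of small files). In isolation:

    # format an OSD data device the way the task does
    yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2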
2026-03-20T18:26:07.711 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm00.local -o noatime 2026-03-20T18:26:07.711 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-1 2026-03-20T18:26:07.783 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-1 2026-03-20T18:26:07.853 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-2 2026-03-20T18:26:07.917 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:07.917 INFO:tasks.ceph:role: osd.2 2026-03-20T18:26:07.917 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_3 on ubuntu@vm00.local 2026-03-20T18:26:07.917 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_3 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_3 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T18:26:07.982 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T18:26:07.986 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done. 
2026-03-20T18:26:07.988 INFO:tasks.ceph:mount /dev/vg_nvme/lv_3 on ubuntu@vm00.local -o noatime 2026-03-20T18:26:07.989 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_3 /var/lib/ceph/osd/ceph-2 2026-03-20T18:26:08.062 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-2 2026-03-20T18:26:08.130 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-3 2026-03-20T18:26:08.196 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3', 'osd.3': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:08.196 INFO:tasks.ceph:role: osd.3 2026-03-20T18:26:08.196 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_4 on ubuntu@vm00.local 2026-03-20T18:26:08.196 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_4 2026-03-20T18:26:08.259 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_4 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T18:26:08.260 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T18:26:08.264 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done. 
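Each freshly formatted device is then mounted noatime, skipping access-time updates that would only add write traffic under an OSD, and relabeled with restorecon, which matters on an SELinux-enforcing CentOS host so ceph-osd can later create files under the new mount point. The per-OSD pair of commands, as run for osd.0:

    # mount the OSD data dir and restore its SELinux context
    sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-0
    sudo /sbin/restorecon /var/lib/ceph/osd/ceph-0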
2026-03-20T18:26:08.267 INFO:tasks.ceph:mount /dev/vg_nvme/lv_4 on ubuntu@vm00.local -o noatime 2026-03-20T18:26:08.267 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-3 2026-03-20T18:26:08.337 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-3 2026-03-20T18:26:08.405 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:08.488 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:08.486+0000 7f252cb52900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory 2026-03-20T18:26:08.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:08.487+0000 7f252cb52900 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring 2026-03-20T18:26:08.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:08.487+0000 7f252cb52900 -1 bdev(0x56169d45f800 /var/lib/ceph/osd/ceph-0/block) open stat got: (1) Operation not permitted 2026-03-20T18:26:08.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:08.487+0000 7f252cb52900 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid 2026-03-20T18:26:09.150 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-20T18:26:09.219 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:09.298 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:09.297+0000 7f6346812900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2026-03-20T18:26:09.299 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:09.297+0000 7f6346812900 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring 2026-03-20T18:26:09.299 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:09.297+0000 7f6346812900 -1 bdev(0x557f912db800 /var/lib/ceph/osd/ceph-1/block) open stat got: (1) Operation not permitted 2026-03-20T18:26:09.299 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:09.297+0000 7f6346812900 -1 bluestore(/var/lib/ceph/osd/ceph-1) _read_fsid unparsable uuid 2026-03-20T18:26:09.986 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-20T18:26:10.057 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 2 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:10.140 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:10.138+0000 7fd3d375a900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory 2026-03-20T18:26:10.140 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:10.138+0000 7fd3d375a900 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring 2026-03-20T18:26:10.140 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:10.138+0000 7fd3d375a900 -1 bdev(0x55656d027800 /var/lib/ceph/osd/ceph-2/block) open stat got: (1) Operation not permitted 
2026-03-20T18:26:10.140 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:10.139+0000 7fd3d375a900 -1 bluestore(/var/lib/ceph/osd/ceph-2) _read_fsid unparsable uuid 2026-03-20T18:26:10.820 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-20T18:26:10.850 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:10.935 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:10.934+0000 7f738c895900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-3/keyring: can't open /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory 2026-03-20T18:26:10.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:10.934+0000 7f738c895900 -1 created new key in keyring /var/lib/ceph/osd/ceph-3/keyring 2026-03-20T18:26:10.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:10.934+0000 7f738c895900 -1 bdev(0x55ed7e4ff800 /var/lib/ceph/osd/ceph-3/block) open stat got: (1) Operation not permitted 2026-03-20T18:26:10.936 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:10.934+0000 7f738c895900 -1 bluestore(/var/lib/ceph/osd/ceph-3) _read_fsid unparsable uuid 2026-03-20T18:26:11.697 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-20T18:26:11.724 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /var/lib/ceph/osd/ceph-4 2026-03-20T18:26:11.751 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:11.757 INFO:tasks.ceph:role: osd.4 2026-03-20T18:26:11.757 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm02.local 2026-03-20T18:26:11.757 DEBUG:teuthology.orchestra.run.vm02:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout: = sunit=0 swidth=0 blks 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T18:26:11.819 INFO:teuthology.orchestra.run.vm02.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T18:26:11.828 INFO:teuthology.orchestra.run.vm02.stdout:Discarding blocks...Done. 
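The -1-level stderr lines emitted by ceph-osd --mkfs --mkkey above (and again for osd.4-7 below) are expected first-run noise rather than failures: the keyring "No such file or directory" is immediately followed by "created new key", the "_read_fsid unparsable uuid" just means no fsid existed yet so one is generated, and each run proceeds to the chown and the next OSD. Reduced to the essential sequence for one OSD, with the chown last because mkfs runs as root but the daemon will run as the ceph user:

    # initialize osd.0's data dir and on-disk key, then hand it to the ceph user
    sudo ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 0 \
        --monmap /home/ubuntu/cephtest/ceph.monmap
    sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0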
2026-03-20T18:26:11.831 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm02.local -o noatime 2026-03-20T18:26:11.831 DEBUG:teuthology.orchestra.run.vm02:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-4 2026-03-20T18:26:11.901 DEBUG:teuthology.orchestra.run.vm02:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-4 2026-03-20T18:26:11.969 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /var/lib/ceph/osd/ceph-5 2026-03-20T18:26:12.038 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:12.038 INFO:tasks.ceph:role: osd.5 2026-03-20T18:26:12.038 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm02.local 2026-03-20T18:26:12.038 DEBUG:teuthology.orchestra.run.vm02:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2 2026-03-20T18:26:12.103 INFO:teuthology.orchestra.run.vm02.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout: = sunit=0 swidth=0 blks 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T18:26:12.104 INFO:teuthology.orchestra.run.vm02.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T18:26:12.108 INFO:teuthology.orchestra.run.vm02.stdout:Discarding blocks...Done. 
2026-03-20T18:26:12.111 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm02.local -o noatime 2026-03-20T18:26:12.111 DEBUG:teuthology.orchestra.run.vm02:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-5 2026-03-20T18:26:12.183 DEBUG:teuthology.orchestra.run.vm02:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-5 2026-03-20T18:26:12.250 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /var/lib/ceph/osd/ceph-6 2026-03-20T18:26:12.316 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:12.316 INFO:tasks.ceph:role: osd.6 2026-03-20T18:26:12.316 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_3 on ubuntu@vm02.local 2026-03-20T18:26:12.316 DEBUG:teuthology.orchestra.run.vm02:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_3 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout:meta-data=/dev/vg_nvme/lv_3 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout: = sunit=0 swidth=0 blks 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T18:26:12.381 INFO:teuthology.orchestra.run.vm02.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T18:26:12.385 INFO:teuthology.orchestra.run.vm02.stdout:Discarding blocks...Done. 
2026-03-20T18:26:12.387 INFO:tasks.ceph:mount /dev/vg_nvme/lv_3 on ubuntu@vm02.local -o noatime 2026-03-20T18:26:12.387 DEBUG:teuthology.orchestra.run.vm02:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_3 /var/lib/ceph/osd/ceph-6 2026-03-20T18:26:12.461 DEBUG:teuthology.orchestra.run.vm02:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-6 2026-03-20T18:26:12.529 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /var/lib/ceph/osd/ceph-7 2026-03-20T18:26:12.595 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2', 'osd.6': '/dev/vg_nvme/lv_3', 'osd.7': '/dev/vg_nvme/lv_4'} 2026-03-20T18:26:12.595 INFO:tasks.ceph:role: osd.7 2026-03-20T18:26:12.595 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_4 on ubuntu@vm02.local 2026-03-20T18:26:12.595 DEBUG:teuthology.orchestra.run.vm02:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_4 2026-03-20T18:26:12.660 INFO:teuthology.orchestra.run.vm02.stdout:meta-data=/dev/vg_nvme/lv_4 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T18:26:12.660 INFO:teuthology.orchestra.run.vm02.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T18:26:12.660 INFO:teuthology.orchestra.run.vm02.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T18:26:12.661 INFO:teuthology.orchestra.run.vm02.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T18:26:12.661 INFO:teuthology.orchestra.run.vm02.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T18:26:12.661 INFO:teuthology.orchestra.run.vm02.stdout: = sunit=0 swidth=0 blks 2026-03-20T18:26:12.661 INFO:teuthology.orchestra.run.vm02.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T18:26:12.661 INFO:teuthology.orchestra.run.vm02.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T18:26:12.661 INFO:teuthology.orchestra.run.vm02.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T18:26:12.661 INFO:teuthology.orchestra.run.vm02.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T18:26:12.665 INFO:teuthology.orchestra.run.vm02.stdout:Discarding blocks...Done. 
2026-03-20T18:26:12.667 INFO:tasks.ceph:mount /dev/vg_nvme/lv_4 on ubuntu@vm02.local -o noatime 2026-03-20T18:26:12.667 DEBUG:teuthology.orchestra.run.vm02:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-7 2026-03-20T18:26:12.737 DEBUG:teuthology.orchestra.run.vm02:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-7 2026-03-20T18:26:12.807 DEBUG:teuthology.orchestra.run.vm02:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 4 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:12.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:12.889+0000 7f3747dc1900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-4/keyring: can't open /var/lib/ceph/osd/ceph-4/keyring: (2) No such file or directory 2026-03-20T18:26:12.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:12.889+0000 7f3747dc1900 -1 created new key in keyring /var/lib/ceph/osd/ceph-4/keyring 2026-03-20T18:26:12.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:12.889+0000 7f3747dc1900 -1 bdev(0x5651f4303800 /var/lib/ceph/osd/ceph-4/block) open stat got: (1) Operation not permitted 2026-03-20T18:26:12.890 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:12.889+0000 7f3747dc1900 -1 bluestore(/var/lib/ceph/osd/ceph-4) _read_fsid unparsable uuid 2026-03-20T18:26:13.669 DEBUG:teuthology.orchestra.run.vm02:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-20T18:26:13.736 DEBUG:teuthology.orchestra.run.vm02:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 5 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:13.815 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:13.815+0000 7f1e06fe0900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-5/keyring: can't open /var/lib/ceph/osd/ceph-5/keyring: (2) No such file or directory 2026-03-20T18:26:13.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:13.815+0000 7f1e06fe0900 -1 created new key in keyring /var/lib/ceph/osd/ceph-5/keyring 2026-03-20T18:26:13.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:13.815+0000 7f1e06fe0900 -1 bdev(0x560faf173800 /var/lib/ceph/osd/ceph-5/block) open stat got: (1) Operation not permitted 2026-03-20T18:26:13.816 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:13.815+0000 7f1e06fe0900 -1 bluestore(/var/lib/ceph/osd/ceph-5) _read_fsid unparsable uuid 2026-03-20T18:26:14.505 DEBUG:teuthology.orchestra.run.vm02:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-20T18:26:14.573 DEBUG:teuthology.orchestra.run.vm02:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 6 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:14.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:14.650+0000 7f0473a9f900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-6/keyring: can't open /var/lib/ceph/osd/ceph-6/keyring: (2) No such file or directory 2026-03-20T18:26:14.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:14.651+0000 7f0473a9f900 -1 created new key in keyring /var/lib/ceph/osd/ceph-6/keyring 2026-03-20T18:26:14.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:14.651+0000 7f0473a9f900 -1 bdev(0x55ca86f8b800 /var/lib/ceph/osd/ceph-6/block) open stat got: (1) Operation not permitted 
2026-03-20T18:26:14.651 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:14.651+0000 7f0473a9f900 -1 bluestore(/var/lib/ceph/osd/ceph-6) _read_fsid unparsable uuid 2026-03-20T18:26:15.349 DEBUG:teuthology.orchestra.run.vm02:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-6 2026-03-20T18:26:15.417 DEBUG:teuthology.orchestra.run.vm02:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 7 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:15.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:15.502+0000 7fa03d61e900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-7/keyring: can't open /var/lib/ceph/osd/ceph-7/keyring: (2) No such file or directory 2026-03-20T18:26:15.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:15.503+0000 7fa03d61e900 -1 created new key in keyring /var/lib/ceph/osd/ceph-7/keyring 2026-03-20T18:26:15.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:15.503+0000 7fa03d61e900 -1 bdev(0x562f37aa1800 /var/lib/ceph/osd/ceph-7/block) open stat got: (1) Operation not permitted 2026-03-20T18:26:15.503 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:15.503+0000 7fa03d61e900 -1 bluestore(/var/lib/ceph/osd/ceph-7) _read_fsid unparsable uuid 2026-03-20T18:26:16.254 DEBUG:teuthology.orchestra.run.vm02:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-7 2026-03-20T18:26:16.283 INFO:tasks.ceph:Reading keys from all nodes... 2026-03-20T18:26:16.283 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:16.283 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/mgr/ceph-y/keyring of=/dev/stdout 2026-03-20T18:26:16.308 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:16.308 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-0/keyring of=/dev/stdout 2026-03-20T18:26:16.368 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:16.368 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-1/keyring of=/dev/stdout 2026-03-20T18:26:16.430 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:16.430 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-2/keyring of=/dev/stdout 2026-03-20T18:26:16.492 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:16.492 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-3/keyring of=/dev/stdout 2026-03-20T18:26:16.559 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:16.559 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/mgr/ceph-x/keyring of=/dev/stdout 2026-03-20T18:26:16.582 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:16.582 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/osd/ceph-4/keyring of=/dev/stdout 2026-03-20T18:26:16.649 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:16.649 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/osd/ceph-5/keyring of=/dev/stdout 2026-03-20T18:26:16.715 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:16.715 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/osd/ceph-6/keyring of=/dev/stdout 2026-03-20T18:26:16.779 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:16.779 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/osd/ceph-7/keyring of=/dev/stdout 2026-03-20T18:26:16.841 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:16.841 
DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.0.keyring of=/dev/stdout 2026-03-20T18:26:16.856 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:16.856 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.client.1.keyring of=/dev/stdout 2026-03-20T18:26:16.896 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-20T18:26:16.896 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.client.2.keyring of=/dev/stdout 2026-03-20T18:26:16.911 INFO:tasks.ceph:Adding keys to all mons... 2026-03-20T18:26:16.911 DEBUG:teuthology.orchestra.run.vm00:> sudo tee -a /etc/ceph/ceph.keyring 2026-03-20T18:26:16.912 DEBUG:teuthology.orchestra.run.vm02:> sudo tee -a /etc/ceph/ceph.keyring 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[mgr.y] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQA/kb1phLVzBRAABw6Zs96lE6ovrDVoidqWPw== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[osd.0] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBAkb1ptIkTHRAAczNggEvkQ63TZG3Nppg6gQ== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[osd.1] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBBkb1puVrEERAATxIIajD8QgQm3V+DIzar/g== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[osd.2] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBCkb1pI5JRCBAABqlT/AjYv1vcvNDN7BjfEg== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[osd.3] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBCkb1pXLe7NxAAdw4DV5VXNzpvJSwUzVjv+w== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[mgr.x] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQA/kb1pHPHXCBAA2DJrHHaTDZm6zQEyZmLcQw== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[osd.4] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBEkb1puLwNNRAA7mliQpx47QWrKvp7QhZzLA== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[osd.5] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBFkb1pTuKiMBAA7hmZLKB83gqEbFDXcue/dQ== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[osd.6] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBGkb1puQ3XJhAAGBolBaIyGpw/mbTbOgCq4Q== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[osd.7] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBHkb1pDR8EHhAAszatWoZfULPvlKFBxmj/tg== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[client.0] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQA/kb1pCeZCCxAAr8KhKPhe7QGNNzXDiy728A== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[client.1] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQA/kb1p46aBDhAA7HfIcA7Lpx/yM6xvP6IVWw== 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout:[client.2] 2026-03-20T18:26:16.939 INFO:teuthology.orchestra.run.vm00.stdout: key = AQA/kb1pAbv1ERAAOTl770Enw9jWfimUCdLCfA== 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout:[mgr.y] 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout: key = AQA/kb1phLVzBRAABw6Zs96lE6ovrDVoidqWPw== 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout:[osd.0] 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBAkb1ptIkTHRAAczNggEvkQ63TZG3Nppg6gQ== 
2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout:[osd.1] 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBBkb1puVrEERAATxIIajD8QgQm3V+DIzar/g== 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout:[osd.2] 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBCkb1pI5JRCBAABqlT/AjYv1vcvNDN7BjfEg== 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout:[osd.3] 2026-03-20T18:26:16.962 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBCkb1pXLe7NxAAdw4DV5VXNzpvJSwUzVjv+w== 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout:[mgr.x] 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout: key = AQA/kb1pHPHXCBAA2DJrHHaTDZm6zQEyZmLcQw== 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout:[osd.4] 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBEkb1puLwNNRAA7mliQpx47QWrKvp7QhZzLA== 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout:[osd.5] 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBFkb1pTuKiMBAA7hmZLKB83gqEbFDXcue/dQ== 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout:[osd.6] 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBGkb1puQ3XJhAAGBolBaIyGpw/mbTbOgCq4Q== 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout:[osd.7] 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBHkb1pDR8EHhAAszatWoZfULPvlKFBxmj/tg== 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout:[client.0] 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout: key = AQA/kb1pCeZCCxAAr8KhKPhe7QGNNzXDiy728A== 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout:[client.1] 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout: key = AQA/kb1p46aBDhAA7HfIcA7Lpx/yM6xvP6IVWw== 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout:[client.2] 2026-03-20T18:26:16.963 INFO:teuthology.orchestra.run.vm02.stdout: key = AQA/kb1pAbv1ERAAOTl770Enw9jWfimUCdLCfA== 2026-03-20T18:26:16.963 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.y --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *' 2026-03-20T18:26:16.982 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.y --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *' 2026-03-20T18:26:17.048 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.049 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.095 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.096 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring 
--name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.142 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.143 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.190 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.191 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.237 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *' 2026-03-20T18:26:17.239 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *' 2026-03-20T18:26:17.282 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.4 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.323 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.4 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.366 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.5 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.367 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.5 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.447 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.6 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.452 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.6 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.526 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.7 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.535 
DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.7 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T18:26:17.606 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T18:26:17.619 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T18:26:17.689 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.1 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T18:26:17.691 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.1 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T18:26:17.776 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.2 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T18:26:17.777 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.2 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T18:26:17.825 INFO:tasks.ceph:Running mkfs on mon nodes... 
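Before the mon mkfs step that follows, the ceph task populates /etc/ceph/ceph.keyring on every node: ceph-authtool is run once per daemon identity, attaching 'allow profile osd'/'allow profile mgr' caps to daemon names and mon 'allow rw', mgr 'allow r', osd 'allow rwx' caps to clients. A minimal Python sketch of one such call (sketch only; the keyring path is taken from the log and the plain subprocess form is an assumption — the real invocations are wrapped in adjust-ulimits and ceph-coverage):

import subprocess

# Sketch: mirror one ceph-authtool call from the log above, adding
# OSD-profile caps for a single osd.N identity to the shared keyring.
def add_osd_caps(osd_id, keyring="/etc/ceph/ceph.keyring"):
    subprocess.run(
        ["sudo", "ceph-authtool", keyring,
         "--name=osd.%d" % osd_id,
         "--cap", "mon", "allow profile osd",
         "--cap", "mgr", "allow profile osd",
         "--cap", "osd", "allow *"],
        check=True,  # fail loudly if ceph-authtool errors
    )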
2026-03-20T18:26:17.825 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mon/ceph-a 2026-03-20T18:26:17.851 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i a --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring 2026-03-20T18:26:17.987 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-a 2026-03-20T18:26:18.014 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mon/ceph-c 2026-03-20T18:26:18.080 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i c --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring 2026-03-20T18:26:18.174 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-c 2026-03-20T18:26:18.199 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /var/lib/ceph/mon/ceph-b 2026-03-20T18:26:18.224 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i b --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring 2026-03-20T18:26:18.324 DEBUG:teuthology.orchestra.run.vm02:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-b 2026-03-20T18:26:18.349 DEBUG:teuthology.orchestra.run.vm00:> rm -- /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:18.351 DEBUG:teuthology.orchestra.run.vm02:> rm -- /home/ubuntu/cephtest/ceph.monmap 2026-03-20T18:26:18.404 INFO:tasks.ceph:Starting mon daemons in cluster ceph... 2026-03-20T18:26:18.405 INFO:tasks.ceph.mon.a:Restarting daemon 2026-03-20T18:26:18.405 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a 2026-03-20T18:26:18.407 INFO:tasks.ceph.mon.a:Started 2026-03-20T18:26:18.407 INFO:tasks.ceph.mon.c:Restarting daemon 2026-03-20T18:26:18.407 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i c 2026-03-20T18:26:18.409 INFO:tasks.ceph.mon.c:Started 2026-03-20T18:26:18.409 INFO:tasks.ceph.mon.b:Restarting daemon 2026-03-20T18:26:18.409 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i b 2026-03-20T18:26:18.447 INFO:tasks.ceph.mon.b:Started 2026-03-20T18:26:18.447 INFO:tasks.ceph:Starting mgr daemons in cluster ceph... 
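Each monitor above is bootstrapped the same way: create its data directory, run ceph-mon --mkfs against the generated monmap and the shared keyring, chown the result to ceph:ceph, remove the temporary monmap, then start the daemon in the foreground under daemon-helper. A rough Python equivalent of the per-monitor mkfs sequence (sketch only; paths come from the log, the subprocess form is an assumption):

import subprocess

def mkfs_mon(mon_id, monmap="/home/ubuntu/cephtest/ceph.monmap",
             keyring="/etc/ceph/ceph.keyring"):
    data_dir = "/var/lib/ceph/mon/ceph-%s" % mon_id
    # create the mon data directory
    subprocess.run(["sudo", "mkdir", "-p", data_dir], check=True)
    # initialize the mon store from the monmap and shared keyring
    subprocess.run(["sudo", "ceph-mon", "--cluster", "ceph", "--mkfs",
                    "-i", mon_id, "--monmap", monmap, "--keyring", keyring],
                   check=True)
    # ceph-mon runs as ceph:ceph, so hand the directory over
    subprocess.run(["sudo", "chown", "-R", "ceph:ceph", data_dir], check=True)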
2026-03-20T18:26:18.447 INFO:tasks.ceph.mgr.y:Restarting daemon 2026-03-20T18:26:18.447 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i y 2026-03-20T18:26:18.449 INFO:tasks.ceph.mgr.y:Started 2026-03-20T18:26:18.449 INFO:tasks.ceph.mgr.x:Restarting daemon 2026-03-20T18:26:18.449 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x 2026-03-20T18:26:18.451 INFO:tasks.ceph.mgr.x:Started 2026-03-20T18:26:18.451 DEBUG:tasks.ceph:set 0 configs 2026-03-20T18:26:18.451 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph config dump 2026-03-20T18:26:23.766 INFO:teuthology.orchestra.run.vm00.stdout:WHO MASK LEVEL OPTION VALUE RO 2026-03-20T18:26:23.782 INFO:tasks.ceph:Setting crush tunables to default 2026-03-20T18:26:23.782 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd crush tunables default 2026-03-20T18:26:23.895 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-20T18:26:23.907 INFO:tasks.ceph:check_enable_crimson: False 2026-03-20T18:26:23.907 INFO:tasks.ceph:Starting osd daemons in cluster ceph... 2026-03-20T18:26:23.907 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:23.907 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-0/fsid of=/dev/stdout 2026-03-20T18:26:23.936 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:23.936 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-1/fsid of=/dev/stdout 2026-03-20T18:26:24.003 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:24.003 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-2/fsid of=/dev/stdout 2026-03-20T18:26:24.069 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:26:24.069 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-3/fsid of=/dev/stdout 2026-03-20T18:26:24.135 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:24.135 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/osd/ceph-4/fsid of=/dev/stdout 2026-03-20T18:26:24.157 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:24.157 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/osd/ceph-5/fsid of=/dev/stdout 2026-03-20T18:26:24.218 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:24.218 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/osd/ceph-6/fsid of=/dev/stdout 2026-03-20T18:26:24.278 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-20T18:26:24.278 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/osd/ceph-7/fsid of=/dev/stdout 2026-03-20T18:26:24.340 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --cluster ceph osd new 4d30620c-8de8-4804-b27e-fead5a2c9a3b 0 2026-03-20T18:26:24.500 INFO:teuthology.orchestra.run.vm02.stdout:0 2026-03-20T18:26:24.509 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --cluster ceph osd new 69004132-8b19-4ed3-995c-8404da69c0bb 1 2026-03-20T18:26:24.544 INFO:tasks.ceph.mgr.y.vm00.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. 
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-20T18:26:24.544 INFO:tasks.ceph.mgr.y.vm00.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-20T18:26:24.544 INFO:tasks.ceph.mgr.y.vm00.stderr: from numpy import show_config as show_numpy_config 2026-03-20T18:26:24.555 INFO:tasks.ceph.mgr.x.vm02.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-20T18:26:24.555 INFO:tasks.ceph.mgr.x.vm02.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-20T18:26:24.555 INFO:tasks.ceph.mgr.x.vm02.stderr: from numpy import show_config as show_numpy_config 2026-03-20T18:26:24.630 INFO:teuthology.orchestra.run.vm02.stdout:1 2026-03-20T18:26:24.642 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --cluster ceph osd new 373fd0ae-fd8f-49e3-9818-d39d924863cd 2 2026-03-20T18:26:24.769 INFO:teuthology.orchestra.run.vm02.stdout:2 2026-03-20T18:26:24.779 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --cluster ceph osd new 5e6c88a6-dd56-44f3-9a9b-b2b2877ceded 3 2026-03-20T18:26:24.901 INFO:teuthology.orchestra.run.vm02.stdout:3 2026-03-20T18:26:24.910 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --cluster ceph osd new 7af8c08a-731b-4b2c-9fd0-6af79b36888c 4 2026-03-20T18:26:25.031 INFO:teuthology.orchestra.run.vm02.stdout:4 2026-03-20T18:26:25.041 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --cluster ceph osd new 39e7f44d-3279-4660-a953-f8f725495058 5 2026-03-20T18:26:25.159 INFO:teuthology.orchestra.run.vm02.stdout:5 2026-03-20T18:26:25.169 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --cluster ceph osd new 5f1a9df7-a8c6-40c0-9dc6-725daf321341 6 2026-03-20T18:26:25.289 INFO:teuthology.orchestra.run.vm02.stdout:6 2026-03-20T18:26:25.298 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph --cluster ceph osd new dc331bc1-918e-4544-9790-d94c1b638e33 7 2026-03-20T18:26:25.427 INFO:teuthology.orchestra.run.vm02.stdout:7 2026-03-20T18:26:25.437 INFO:tasks.ceph.osd.0:Restarting daemon 2026-03-20T18:26:25.438 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0 2026-03-20T18:26:25.439 INFO:tasks.ceph.osd.0:Started 2026-03-20T18:26:25.439 INFO:tasks.ceph.osd.1:Restarting daemon 2026-03-20T18:26:25.439 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1 2026-03-20T18:26:25.440 INFO:tasks.ceph.osd.1:Started 2026-03-20T18:26:25.441 INFO:tasks.ceph.osd.2:Restarting daemon 2026-03-20T18:26:25.441 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2 2026-03-20T18:26:25.443 INFO:tasks.ceph.osd.2:Started 2026-03-20T18:26:25.443 INFO:tasks.ceph.osd.3:Restarting daemon 2026-03-20T18:26:25.443 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill 
ceph-osd -f --cluster ceph -i 3 2026-03-20T18:26:25.444 INFO:tasks.ceph.osd.3:Started 2026-03-20T18:26:25.444 INFO:tasks.ceph.osd.4:Restarting daemon 2026-03-20T18:26:25.444 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 4 2026-03-20T18:26:25.446 INFO:tasks.ceph.osd.4:Started 2026-03-20T18:26:25.446 INFO:tasks.ceph.osd.5:Restarting daemon 2026-03-20T18:26:25.446 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 5 2026-03-20T18:26:25.447 INFO:tasks.ceph.osd.5:Started 2026-03-20T18:26:25.447 INFO:tasks.ceph.osd.6:Restarting daemon 2026-03-20T18:26:25.447 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 6 2026-03-20T18:26:25.451 INFO:tasks.ceph.osd.6:Started 2026-03-20T18:26:25.451 INFO:tasks.ceph.osd.7:Restarting daemon 2026-03-20T18:26:25.451 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 7 2026-03-20T18:26:25.453 INFO:tasks.ceph.osd.7:Started 2026-03-20T18:26:25.453 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-20T18:26:25.592 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T18:26:25.590+0000 7f8b82f01900 -1 Falling back to public interface 2026-03-20T18:26:25.604 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T18:26:25.603+0000 7fc56120d900 -1 Falling back to public interface 2026-03-20T18:26:25.609 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T18:26:25.608+0000 7fdec0a05900 -1 Falling back to public interface 2026-03-20T18:26:25.614 INFO:tasks.ceph.osd.5.vm02.stderr:2026-03-20T18:26:25.613+0000 7fcd413c9900 -1 Falling back to public interface 2026-03-20T18:26:25.615 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:25.615 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":10,"fsid":"e1f9fff1-39d0-4146-abc7-3b481a096f4f","created":"2026-03-20T18:26:23.709654+0000","modified":"2026-03-20T18:26:25.421664+0000","last_up_change":"0.000000","last_in_change":"2026-03-20T18:26:25.421664+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"4d30620c-8de8-4804-b27e-fead5a2c9a3b","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":1,"uuid":"69004132-8b19-4ed3-995c-8404da69c0bb","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"373fd0ae-fd8f-49e3-9818-d39d924863cd","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"5e6c88a6-dd56-44f3-9a9b-b2b2877ceded","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":4,"uuid":"7af8c08a-731b-4b2c-9fd0-6af79b36888c","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":5,"uuid":"39e7f44d-3279-4660-a953-f8f725495058","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":6,"uuid":"5f1a9df7-a8c6-40c0-9dc6-725daf321341","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":7,"uuid":"dc331bc1-918e-4544-9790-d94c1b638e33","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T18:26:25.617 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T18:26:25.616+0000 7f80afdc1900 -1 Falling back to public interface 2026-03-20T18:26:25.624 INFO:tasks.ceph.ceph_manager.ceph:[] 2026-03-20T18:26:25.624 INFO:tasks.ceph:Waiting for OSDs to come up 2026-03-20T18:26:25.631 INFO:tasks.ceph.osd.4.vm02.stderr:2026-03-20T18:26:25.630+0000 7fa43515a900 -1 Falling back to public interface 2026-03-20T18:26:25.637 INFO:tasks.ceph.osd.7.vm02.stderr:2026-03-20T18:26:25.636+0000 7f01aa0e0900 -1 Falling back to public interface 2026-03-20T18:26:25.644 INFO:tasks.ceph.osd.6.vm02.stderr:2026-03-20T18:26:25.643+0000 7fb808356900 -1 Falling back to public interface 2026-03-20T18:26:25.994 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T18:26:25.992+0000 7f8b82f01900 -1 osd.3 0 log_to_monitors true 2026-03-20T18:26:26.030 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T18:26:26.028+0000 7f80afdc1900 -1 osd.2 0 log_to_monitors true 2026-03-20T18:26:26.060 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T18:26:26.059+0000 7fc56120d900 -1 osd.0 0 log_to_monitors true 2026-03-20T18:26:26.068 INFO:tasks.ceph.osd.6.vm02.stderr:2026-03-20T18:26:26.066+0000 7fb808356900 -1 osd.6 0 
log_to_monitors true 2026-03-20T18:26:26.086 INFO:tasks.ceph.osd.7.vm02.stderr:2026-03-20T18:26:26.085+0000 7f01aa0e0900 -1 osd.7 0 log_to_monitors true 2026-03-20T18:26:26.088 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T18:26:26.087+0000 7fdec0a05900 -1 osd.1 0 log_to_monitors true 2026-03-20T18:26:26.118 INFO:tasks.ceph.osd.4.vm02.stderr:2026-03-20T18:26:26.117+0000 7fa43515a900 -1 osd.4 0 log_to_monitors true 2026-03-20T18:26:26.133 INFO:tasks.ceph.osd.5.vm02.stderr:2026-03-20T18:26:26.132+0000 7fcd413c9900 -1 osd.5 0 log_to_monitors true 2026-03-20T18:26:26.429 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-03-20T18:26:26.528 INFO:teuthology.misc.health.vm00.stdout: 2026-03-20T18:26:26.529 INFO:teuthology.misc.health.vm00.stdout:{"epoch":10,"fsid":"e1f9fff1-39d0-4146-abc7-3b481a096f4f","created":"2026-03-20T18:26:23.709654+0000","modified":"2026-03-20T18:26:25.421664+0000","last_up_change":"0.000000","last_in_change":"2026-03-20T18:26:25.421664+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"4d30620c-8de8-4804-b27e-fead5a2c9a3b","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"69004132-8b19-4ed3-995c-8404da69c0bb","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"373fd0ae-fd8f-49e3-9818-d39d924863cd","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":3,"uuid":"5e6c88a6-dd56-44f3-9a9b-b2b2877ceded","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":4,"uuid":"7af8c08a-731b-4b2c-9fd0-6af79b36888c","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":5,"uuid":"39e7f44d-3279-4660-a953-f8f725495058","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":6,"uuid":"5f1a9df7-a8c6-40c0-9dc6-725daf321341","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":7,"uuid":"dc331bc1-918e-4544-9790-d94c1b638e33","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T18:26:26.535 DEBUG:teuthology.misc:0 of 8 OSDs are up 2026-03-20T18:26:27.255 INFO:tasks.ceph.mgr.x.vm02.stderr:2026-03-20T18:26:27.254+0000 7f87391ed640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-20T18:26:27.255 INFO:tasks.ceph.mgr.x.vm02.stderr:2026-03-20T18:26:27.254+0000 7f87391ed640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-20T18:26:27.737 INFO:tasks.ceph.osd.7.vm02.stderr:2026-03-20T18:26:27.737+0000 7f01a584c640 -1 osd.7 0 waiting for initial osdmap 2026-03-20T18:26:27.743 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T18:26:27.742+0000 7f80ab52f640 -1 osd.2 0 waiting for initial osdmap 2026-03-20T18:26:27.744 INFO:tasks.ceph.osd.7.vm02.stderr:2026-03-20T18:26:27.743+0000 7f01a0e63640 -1 osd.7 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T18:26:27.744 INFO:tasks.ceph.osd.6.vm02.stderr:2026-03-20T18:26:27.744+0000 7fb803ac2640 -1 osd.6 0 waiting for initial osdmap 2026-03-20T18:26:27.745 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T18:26:27.744+0000 7fdebd1a8640 -1 osd.1 0 waiting for initial osdmap 2026-03-20T18:26:27.746 INFO:tasks.ceph.osd.4.vm02.stderr:2026-03-20T18:26:27.744+0000 7fa4308c6640 -1 osd.4 0 waiting for initial osdmap 2026-03-20T18:26:27.747 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T18:26:27.746+0000 7f8b7e66d640 -1 osd.3 0 waiting for initial osdmap 2026-03-20T18:26:27.747 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T18:26:27.746+0000 7fc55c197640 -1 osd.0 0 waiting for initial osdmap 2026-03-20T18:26:27.748 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T18:26:27.746+0000 7f80a6b46640 -1 osd.2 12 set_numa_affinity unable to identify public interface '' numa node: (2) No 
such file or directory 2026-03-20T18:26:27.751 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T18:26:27.749+0000 7fdeb778a640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T18:26:27.751 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T18:26:27.750+0000 7f8b79c84640 -1 osd.3 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T18:26:27.752 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T18:26:27.751+0000 7fc5577ae640 -1 osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T18:26:27.754 INFO:tasks.ceph.osd.5.vm02.stderr:2026-03-20T18:26:27.754+0000 7fcd3bba0640 -1 osd.5 0 waiting for initial osdmap 2026-03-20T18:26:27.757 INFO:tasks.ceph.osd.5.vm02.stderr:2026-03-20T18:26:27.757+0000 7fcd371b7640 -1 osd.5 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T18:26:27.759 INFO:tasks.ceph.osd.4.vm02.stderr:2026-03-20T18:26:27.758+0000 7fa42bedd640 -1 osd.4 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T18:26:27.762 INFO:tasks.ceph.osd.6.vm02.stderr:2026-03-20T18:26:27.762+0000 7fb7ff0d9640 -1 osd.6 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T18:26:33.339 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-03-20T18:26:33.535 INFO:teuthology.misc.health.vm00.stdout: 2026-03-20T18:26:33.535 INFO:teuthology.misc.health.vm00.stdout:{"epoch":17,"fsid":"e1f9fff1-39d0-4146-abc7-3b481a096f4f","created":"2026-03-20T18:26:23.709654+0000","modified":"2026-03-20T18:26:32.783268+0000","last_up_change":"2026-03-20T18:26:28.740700+0000","last_in_change":"2026-03-20T18:26:25.421664+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T18:26:30.261400+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target
_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":8,"score_stable":8,"optimal_score":0.25,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4d30620c-8de8-4804-b27e-fead5a2c9a3b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6809","nonce":2184203486}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6811","nonce":2184203486}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6815","nonce":2184203486}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6813","nonce":2184203486}]},"public_addr":"192.168.123.100:6809/2184203486","cluster_addr":"192.168.123.100:6811/2184203486","heartbeat_back_addr":"192.168.123.100:6815/2184203486","heartbeat_front_addr":"192.168.123.100:6813/2184203486","state":["exists","up"]},{"osd":1,"uuid":"69004132-8b19-4ed3-995c-8404da69c0bb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6817","nonce":1800934901}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6819","nonce":1800934901}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6823","nonce":1800934901}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6821","nonce":1800934901}]},"public_addr":"192.168.123.100:6817/1800934901","cluster_addr":"192.168.123.100:6819/1800934901","heartbeat_back_addr":"192.168.123.100:6823/1800934901","heartbeat_front_addr":"192.168.123.100:6821/1800934901","state":["exists","up"]},{"osd":2,"uuid":"373fd0ae-fd8f-49e3-9818-d39d924863cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6825","nonce":3070585652}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6827","nonce":3070585652}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6831","nonce":3070585652}]},"heartbeat_front_a
ddrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6829","nonce":3070585652}]},"public_addr":"192.168.123.100:6825/3070585652","cluster_addr":"192.168.123.100:6827/3070585652","heartbeat_back_addr":"192.168.123.100:6831/3070585652","heartbeat_front_addr":"192.168.123.100:6829/3070585652","state":["exists","up"]},{"osd":3,"uuid":"5e6c88a6-dd56-44f3-9a9b-b2b2877ceded","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6801","nonce":3791634668}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6803","nonce":3791634668}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6807","nonce":3791634668}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6805","nonce":3791634668}]},"public_addr":"192.168.123.100:6801/3791634668","cluster_addr":"192.168.123.100:6803/3791634668","heartbeat_back_addr":"192.168.123.100:6807/3791634668","heartbeat_front_addr":"192.168.123.100:6805/3791634668","state":["exists","up"]},{"osd":4,"uuid":"7af8c08a-731b-4b2c-9fd0-6af79b36888c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6809","nonce":2968194588}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6811","nonce":2968194588}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6815","nonce":2968194588}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6813","nonce":2968194588}]},"public_addr":"192.168.123.102:6809/2968194588","cluster_addr":"192.168.123.102:6811/2968194588","heartbeat_back_addr":"192.168.123.102:6815/2968194588","heartbeat_front_addr":"192.168.123.102:6813/2968194588","state":["exists","up"]},{"osd":5,"uuid":"39e7f44d-3279-4660-a953-f8f725495058","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6801","nonce":3886365557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6803","nonce":3886365557}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6807","nonce":3886365557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6805","nonce":3886365557}]},"public_addr":"192.168.123.102:6801/3886365557","cluster_addr":"192.168.123.102:6803/3886365557","heartbeat_back_addr":"192.168.123.102:6807/3886365557","heartbeat_front_addr":"192.168.123.102:6805/3886365557","state":["
exists","up"]},{"osd":6,"uuid":"5f1a9df7-a8c6-40c0-9dc6-725daf321341","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6825","nonce":1378985980}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6827","nonce":1378985980}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6831","nonce":1378985980}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6829","nonce":1378985980}]},"public_addr":"192.168.123.102:6825/1378985980","cluster_addr":"192.168.123.102:6827/1378985980","heartbeat_back_addr":"192.168.123.102:6831/1378985980","heartbeat_front_addr":"192.168.123.102:6829/1378985980","state":["exists","up"]},{"osd":7,"uuid":"dc331bc1-918e-4544-9790-d94c1b638e33","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":15,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6817","nonce":1394504320}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6819","nonce":1394504320}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6823","nonce":1394504320}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6821","nonce":1394504320}]},"public_addr":"192.168.123.102:6817/1394504320","cluster_addr":"192.168.123.102:6819/1394504320","heartbeat_back_addr":"192.168.123.102:6823/1394504320","heartbeat_front_addr":"192.168.123.102:6821/1394504320","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.078082+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.120857+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.017390+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.041786+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.088146+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.173339+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.080770+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.00
0000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.089545+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T18:26:33.546 DEBUG:teuthology.misc:8 of 8 OSDs are up 2026-03-20T18:26:33.546 INFO:tasks.ceph:Creating RBD pool 2026-03-20T18:26:33.546 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd pool create rbd 8 2026-03-20T18:26:33.801 INFO:teuthology.orchestra.run.vm00.stderr:pool 'rbd' created 2026-03-20T18:26:33.815 DEBUG:teuthology.orchestra.run.vm00:> rbd --cluster ceph pool init rbd 2026-03-20T18:26:33.847 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:33.847 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:36.833 INFO:tasks.ceph:Starting mds daemons in cluster ceph... 2026-03-20T18:26:36.833 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config log 1 --format=json 2026-03-20T18:26:36.833 INFO:tasks.daemonwatchdog.daemon_watchdog:watchdog starting 2026-03-20T18:26:37.082 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:37.094 INFO:teuthology.orchestra.run.vm00.stdout:[{"version":1,"timestamp":"0.000000","name":"","changes":[]}] 2026-03-20T18:26:37.094 INFO:tasks.ceph_manager:config epoch is 1 2026-03-20T18:26:37.094 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
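The "0 of 8" and "8 of 8 OSDs are up" messages above come from teuthology.misc repeatedly running 'ceph osd dump --format=json' and counting entries in the "osds" array whose "up" flag is set. A hedged approximation of that wait loop, using the same JSON shape as the dumps above (function name, timeout, and interval are illustrative assumptions):

import json
import subprocess
import time

def wait_for_osds_up(expected=8, timeout=300, interval=5):
    # Poll the OSD map until every OSD reports up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(
            ["ceph", "--cluster", "ceph", "osd", "dump", "--format=json"])
        osds = json.loads(out)["osds"]
        up = sum(1 for o in osds if o.get("up") == 1)
        print("%d of %d OSDs are up" % (up, expected))
        if up >= expected:
            return
        time.sleep(interval)
    raise RuntimeError("timed out waiting for OSDs to come up")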
2026-03-20T18:26:37.094 INFO:tasks.ceph.ceph_manager.ceph:waiting for mgr available 2026-03-20T18:26:37.094 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr dump --format=json 2026-03-20T18:26:37.335 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:37.348 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"flags":0,"active_gid":4103,"active_name":"x","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6832","nonce":3442782288},{"type":"v1","addr":"192.168.123.102:6833","nonce":3442782288}]},"active_addr":"192.168.123.102:6833/3442782288","active_change":"2026-03-20T18:26:26.234573+0000","active_mgr_features":4544132024016699391,"available":true,"standbys":[{"gid":4106,"name":"y","mgr_features":4544132024016699391,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP 
server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman 
only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min
":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default
_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this 
long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"defau
lt_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint"
,"type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how 
long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_als
o":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced","flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set 
internal store backend. for development and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["iostat","nfs"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer 
mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min
":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default
_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this 
long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"defau
lt_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint"
,"type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how 
long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_als
o":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced","flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set 
internal store backend. for development and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"tentacle":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":0,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":2392526762}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":1613760048}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":1086939671}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":1533861395}]}]} 2026-03-20T18:26:37.349 INFO:tasks.ceph.ceph_manager.ceph:mgr available! 2026-03-20T18:26:37.349 INFO:tasks.ceph.ceph_manager.ceph:waiting for all up 2026-03-20T18:26:37.349 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-20T18:26:37.543 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:37.543 
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":21,"fsid":"e1f9fff1-39d0-4146-abc7-3b481a096f4f","created":"2026-03-20T18:26:23.709654+0000","modified":"2026-03-20T18:26:36.816056+0000","last_up_change":"2026-03-20T18:26:28.740700+0000","last_in_change":"2026-03-20T18:26:25.421664+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T18:26:30.261400+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":8,"score_stable":8,"optimal_score":0.25,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-20T18:26:33.745675+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":21,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.9900000095367432,"score_stable":1.9900000095367432,"optimal_score":0.87999999523162842,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4d30620c-8de8-4804-b27e-fead5a2c9a3b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6809","nonce":2184203486}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6811","nonce":2184203486}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6815","nonce":2184203486}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6813","nonce":2184203486}]},"public_addr":"192.168.123.100:6809/2184203486","cluster_addr":"192.168.123.100:6811/2184203486","heartbeat_back_addr":"192.168.123.100:6815/2184203486","heartbeat_front_addr":"192.168.123.100:6813/2184203486","state":["exists","up"]},{"osd":1,"uuid":"69004132-8b19-4ed3-995c-8404da69c0bb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6817","nonce":1800934901}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6819","nonce":1800934901}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6823","nonce":1800934901}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6821","nonce":1800934901}]},"public_addr":"192.168.123.100:6817/1800934901","cluster_addr":"192.168.123.100:6819/1800934901","heartbeat_back_addr":"192.168.123.100:6823/1800934901","heartbeat_front_addr":"192.168.123.100:6821/1800934901","state":["exists","up"]},{"osd":2,"uuid":"373fd0ae-fd8f-49e3-9818-d39d924863cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6825","nonce":3070585652}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6827","nonce":3070585652}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6831","nonce":3070585652}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6829","nonce":3070585652}]},"public_addr":"192.168.123.100:6825/3070585652","cluster_addr":"192.168.123.100:6827/3070585652","heartbeat_back_addr":"192.168.123.100:6831/3070585652","heartbeat_front_addr":"192.168.123.100:6829/3070585652","state":["exists","up"]},{"osd":3,"uuid":"5e6c88a6-dd56-44f3-9a9b-b2b2877ceded","up":1,"in":1,"weight":1,"primary_a
ffinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6801","nonce":3791634668}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6803","nonce":3791634668}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6807","nonce":3791634668}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6805","nonce":3791634668}]},"public_addr":"192.168.123.100:6801/3791634668","cluster_addr":"192.168.123.100:6803/3791634668","heartbeat_back_addr":"192.168.123.100:6807/3791634668","heartbeat_front_addr":"192.168.123.100:6805/3791634668","state":["exists","up"]},{"osd":4,"uuid":"7af8c08a-731b-4b2c-9fd0-6af79b36888c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6809","nonce":2968194588}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6811","nonce":2968194588}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6815","nonce":2968194588}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6813","nonce":2968194588}]},"public_addr":"192.168.123.102:6809/2968194588","cluster_addr":"192.168.123.102:6811/2968194588","heartbeat_back_addr":"192.168.123.102:6815/2968194588","heartbeat_front_addr":"192.168.123.102:6813/2968194588","state":["exists","up"]},{"osd":5,"uuid":"39e7f44d-3279-4660-a953-f8f725495058","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6801","nonce":3886365557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6803","nonce":3886365557}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6807","nonce":3886365557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6805","nonce":3886365557}]},"public_addr":"192.168.123.102:6801/3886365557","cluster_addr":"192.168.123.102:6803/3886365557","heartbeat_back_addr":"192.168.123.102:6807/3886365557","heartbeat_front_addr":"192.168.123.102:6805/3886365557","state":["exists","up"]},{"osd":6,"uuid":"5f1a9df7-a8c6-40c0-9dc6-725daf321341","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6825","nonce":1378985980}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":1378985980},{"type":"v1","addr
":"192.168.123.102:6827","nonce":1378985980}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6831","nonce":1378985980}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6829","nonce":1378985980}]},"public_addr":"192.168.123.102:6825/1378985980","cluster_addr":"192.168.123.102:6827/1378985980","heartbeat_back_addr":"192.168.123.102:6831/1378985980","heartbeat_front_addr":"192.168.123.102:6829/1378985980","state":["exists","up"]},{"osd":7,"uuid":"dc331bc1-918e-4544-9790-d94c1b638e33","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6817","nonce":1394504320}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6819","nonce":1394504320}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6823","nonce":1394504320}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6821","nonce":1394504320}]},"public_addr":"192.168.123.102:6817/1394504320","cluster_addr":"192.168.123.102:6819/1394504320","heartbeat_back_addr":"192.168.123.102:6823/1394504320","heartbeat_front_addr":"192.168.123.102:6821/1394504320","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.078082+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.120857+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.017390+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.041786+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.088146+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.173339+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.080770+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.089545+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin
":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T18:26:37.554 INFO:tasks.ceph.ceph_manager.ceph:all up! 2026-03-20T18:26:37.555 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-20T18:26:37.752 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:37.752 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":21,"fsid":"e1f9fff1-39d0-4146-abc7-3b481a096f4f","created":"2026-03-20T18:26:23.709654+0000","modified":"2026-03-20T18:26:36.816056+0000","last_up_change":"2026-03-20T18:26:28.740700+0000","last_in_change":"2026-03-20T18:26:25.421664+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T18:26:30.261400+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":8,"score_stable":8,"optimal_score":0.25,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-20T18:26:33.745675+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":21,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.9900000095367432,"score_stable":1.9900000095367432,"optimal_score":0.87999999523162842,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4d30620c-8de8-4804-b27e-fead5a2c9a3b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6809","nonce":2184203486}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6811","nonce":2184203486}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6815","nonce":2184203486}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":2184203486},{"type":"v1","addr":"192.168.123.100:6813","nonce":2184203486}]},"public_addr":"192.168.123.100:6809/2184203486","cluster_addr":"192.168.123.100:6811/2184203486","heartbeat_back_addr":"192.168.123.100:6815/2184203486","heartbeat_front_addr":"192.168.123.100:6813/2184203486","state":["exists","up"]},{"osd":1,"uuid":"69004132-8b19-4ed3-995c-8404da69c0bb","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6817","nonce":1800934901}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6819","nonce":1800934901}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6823","nonce":1800934901}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1800934901},{"type":"v1","addr":"192.168.123.100:6821","nonce":1800934901}]},"public_addr":"192.168.123.100:6817/1800934901","cluster_addr":"192.168.123.100:6819/1800934901","heartbeat_back_addr":"192.168.123.100:6823/1800934901","heartbeat_front_addr":"192.168.123.100:6821/1800934901","state":["exists","up"]},{"osd":2,"uuid":"373fd0ae-fd8f-49e3-9818-d39d924863cd","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6825","nonce":3070585652}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6827","nonce":3070585652}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6831","nonce":3070585652}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3070585652},{"type":"v1","addr":"192.168.123.100:6829","nonce":3070585652}]},"public_addr":"192.168.123.100:6825/3070585652","cluster_addr":"192.168.123.100:6827/3070585652","heartbeat_back_addr":"192.168.123.100:6831/3070585652","heartbeat_front_addr":"192.168.123.100:6829/3070585652","state":["exists","up"]},{"osd":3,"uuid":"5e6c88a6-dd56-44f3-9a9b-b2b2877ceded","up":1,"in":1,"weight":1,"primary_a
ffinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6801","nonce":3791634668}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6803","nonce":3791634668}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6807","nonce":3791634668}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3791634668},{"type":"v1","addr":"192.168.123.100:6805","nonce":3791634668}]},"public_addr":"192.168.123.100:6801/3791634668","cluster_addr":"192.168.123.100:6803/3791634668","heartbeat_back_addr":"192.168.123.100:6807/3791634668","heartbeat_front_addr":"192.168.123.100:6805/3791634668","state":["exists","up"]},{"osd":4,"uuid":"7af8c08a-731b-4b2c-9fd0-6af79b36888c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6809","nonce":2968194588}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6811","nonce":2968194588}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6815","nonce":2968194588}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":2968194588},{"type":"v1","addr":"192.168.123.102:6813","nonce":2968194588}]},"public_addr":"192.168.123.102:6809/2968194588","cluster_addr":"192.168.123.102:6811/2968194588","heartbeat_back_addr":"192.168.123.102:6815/2968194588","heartbeat_front_addr":"192.168.123.102:6813/2968194588","state":["exists","up"]},{"osd":5,"uuid":"39e7f44d-3279-4660-a953-f8f725495058","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6801","nonce":3886365557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6803","nonce":3886365557}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6807","nonce":3886365557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3886365557},{"type":"v1","addr":"192.168.123.102:6805","nonce":3886365557}]},"public_addr":"192.168.123.102:6801/3886365557","cluster_addr":"192.168.123.102:6803/3886365557","heartbeat_back_addr":"192.168.123.102:6807/3886365557","heartbeat_front_addr":"192.168.123.102:6805/3886365557","state":["exists","up"]},{"osd":6,"uuid":"5f1a9df7-a8c6-40c0-9dc6-725daf321341","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6825","nonce":1378985980}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":1378985980},{"type":"v1","addr
":"192.168.123.102:6827","nonce":1378985980}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6831","nonce":1378985980}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":1378985980},{"type":"v1","addr":"192.168.123.102:6829","nonce":1378985980}]},"public_addr":"192.168.123.102:6825/1378985980","cluster_addr":"192.168.123.102:6827/1378985980","heartbeat_back_addr":"192.168.123.102:6831/1378985980","heartbeat_front_addr":"192.168.123.102:6829/1378985980","state":["exists","up"]},{"osd":7,"uuid":"dc331bc1-918e-4544-9790-d94c1b638e33","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":18,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6817","nonce":1394504320}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6819","nonce":1394504320}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6823","nonce":1394504320}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":1394504320},{"type":"v1","addr":"192.168.123.102:6821","nonce":1394504320}]},"public_addr":"192.168.123.102:6817/1394504320","cluster_addr":"192.168.123.102:6819/1394504320","heartbeat_back_addr":"192.168.123.102:6823/1394504320","heartbeat_front_addr":"192.168.123.102:6821/1394504320","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.078082+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.120857+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.017390+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.041786+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.088146+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.173339+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.080770+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"2026-03-20T18:26:27.089545+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin
":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T18:26:37.763 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats 2026-03-20T18:26:37.763 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats 2026-03-20T18:26:37.764 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 flush_pg_stats 2026-03-20T18:26:37.764 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 flush_pg_stats 2026-03-20T18:26:37.764 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.4 flush_pg_stats 2026-03-20T18:26:37.764 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.5 flush_pg_stats 2026-03-20T18:26:37.764 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.6 flush_pg_stats 2026-03-20T18:26:37.764 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.7 flush_pg_stats 2026-03-20T18:26:37.950 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:37.950 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-03-20T18:26:37.973 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:37.973 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.6 2026-03-20T18:26:37.980 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:37.980 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-03-20T18:26:37.983 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:37.984 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-20T18:26:38.000 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:38.000 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-03-20T18:26:38.007 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:38.007 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-20T18:26:38.018 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 
2026-03-20T18:26:38.018 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-20T18:26:38.029 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:38.035 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.7 2026-03-20T18:26:38.301 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T18:26:38.313 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T18:26:38.334 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.5 2026-03-20T18:26:38.334 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.3 2026-03-20T18:26:38.366 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T18:26:38.395 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.6 2026-03-20T18:26:38.426 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T18:26:38.428 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T18:26:38.436 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T18:26:38.443 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.4 2026-03-20T18:26:38.446 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.0 2026-03-20T18:26:38.454 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.7 2026-03-20T18:26:38.458 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T18:26:38.470 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.2 2026-03-20T18:26:38.521 INFO:teuthology.orchestra.run.vm00.stdout:55834574850 2026-03-20T18:26:38.533 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574850 for osd.1 2026-03-20T18:26:39.334 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-03-20T18:26:39.335 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-03-20T18:26:39.396 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.6 2026-03-20T18:26:39.444 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-03-20T18:26:39.446 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-20T18:26:39.454 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.7 2026-03-20T18:26:39.471 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-20T18:26:39.533 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-20T18:26:39.628 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:39.640 
INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:39.651 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.5 2026-03-20T18:26:39.652 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:39.662 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.3 2026-03-20T18:26:39.662 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:39.723 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:39.753 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.6 2026-03-20T18:26:39.753 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:39.807 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:39.810 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:39.814 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:39.825 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.0 2026-03-20T18:26:39.825 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:39.834 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.7 2026-03-20T18:26:39.835 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:39.836 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.4 2026-03-20T18:26:39.836 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:39.858 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:39.866 INFO:teuthology.orchestra.run.vm00.stdout:55834574851 2026-03-20T18:26:39.872 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.1 2026-03-20T18:26:39.872 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:39.880 INFO:tasks.ceph.ceph_manager.ceph:need seq 55834574851 got 55834574851 for osd.2 2026-03-20T18:26:39.880 DEBUG:teuthology.parallel:result is None 2026-03-20T18:26:39.880 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-03-20T18:26:39.880 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-20T18:26:40.120 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:40.121 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-20T18:26:40.134 
INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":19,"stamp":"2026-03-20T18:26:38.247255+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":590387,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":235,"num_write_kb":4762,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":95,"ondisk_log_size":95,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":12,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":7,"kb":754974720,"kb_used":217644,"kb_used_data":2828,"kb_used_omap":64,"kb_used_meta":214463,"kb_avail":754757076,"statfs":{"total":773094113280,"available":772871245824,"internally_reserved":0,"allocated":2895872,"data_stored":1749654,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":65658,"internal_metadata":219611014},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"3.441207"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:37.167943+0000","last_change":"2026-03-20T18:26:37.168015+0000","las
t_active":"2026-03-20T18:26:37.167943+0000","last_peered":"2026-03-20T18:26:37.167943+0000","last_clean":"2026-03-20T18:26:37.167943+0000","last_became_active":"2026-03-20T18:26:34.808912+0000","last_became_peered":"2026-03-20T18:26:34.808912+0000","last_unstale":"2026-03-20T18:26:37.167943+0000","last_undegraded":"2026-03-20T18:26:37.167943+0000","last_fullsized":"2026-03-20T18:26:37.167943+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T00:02:10.147344+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00015741400000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7],"acting":[6,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825216+0000","last_change":"2026-03-20T18:26:36.825351+0000","last_active":"2026-03-20T18:26:36.825216+0000","last_peered":"2026-03-20T18:26:36.825216+0000","last_clean":"2026-03-20T18:26:36.825216+0000","last_became_active":"2026-03-20T18:26:34.811439+0000","last_became_peered":"2026-03-20T18:26:34.811439+0000","last_unstale":"2026-03-20T18:26:36.825216+0000","last_undegraded":"2026-03-20T18:26:36.825216+0000","last_fullsized":"2026-03-20T18:26:36.825216+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic 
scrub scheduled @ 2026-03-22T04:26:44.551572+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00098586599999999996,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6],"acting":[1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825358+0000","last_change":"2026-03-20T18:26:36.825482+0000","last_active":"2026-03-20T18:26:36.825358+0000","last_peered":"2026-03-20T18:26:36.825358+0000","last_clean":"2026-03-20T18:26:36.825358+0000","last_became_active":"2026-03-20T18:26:34.809668+0000","last_became_peered":"2026-03-20T18:26:34.809668+0000","last_unstale":"2026-03-20T18:26:36.825358+0000","last_undegraded":"2026-03-20T18:26:36.825358+0000","last_fullsized":"2026-03-20T18:26:36.825358+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T22:37:36.908011+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00049786199999999996,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825242+0000","last_change":"2026-03-20T18:26:36.825353+0000","last_active":"2026-03-20T18:26:36.825242+0000","last_peered":"2026-03-20T18:26:36.825242+0000","last_clean":"2026-03-20T18:26:36.825242+0000","last_became_active":"2026-03-20T18:26:34.812023+0000","last_became_peered":"2026-03-20T18:26:34.812023+0000","last_unstale":"2026-03-20T18:26:36.825242+0000","last_undegraded":"2026-03-20T18:26:36.825242+0000","last_fullsized":"2026-03-20T18:26:36.825242+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T05:16:14.283906+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00085044099999999996,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"21'2","reported_seq":22,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.828642+0000","last_change":"2026-03-20T18:26:36.828642+0000","last_active":"2026-03-20T18:26:36.828642+0000","last_peered":"2026-03-20T18:26:36.828642+0000","last_clean":"2026-03-20T18:26:36.828642+0000","last_became_active":"2026-03-20T18:26:34.811618+0000","last_became_peered":"2026-03-20T18:26:34.811618+0000","last_unstale":"2026-03-20T18:26:36.828642+0000","last_undegraded":"2026-03-20T18:26:36.828642+0000","last_fullsized":"2026-03-20T18:26:36.828642+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T22:01:30.487595+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00065448500000000001,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:37.073041+0000","last_change":"2026-03-20T18:26:37.073127+0000","last_active":"2026-03-20T18:26:37.073041+0000","last_peered":"2026-03-20T18:26:37.073041+0000","last_clean":"2026-03-20T18:26:37.073041+0000","last_became_active":"2026-03-20T18:26:35.142116+0000","last_became_peered":"2026-03-20T18:26:35.142116+0000","last_unstale":"2026-03-20T18:26:37.073041+0000","last_undegraded":"2026-03-20T18:26:37.073041+0000","last_fullsized":"2026-03-20T18:26:37.073041+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T23:41:44.440453+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00018643899999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3],"acting":[2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825505+0000","last_change":"2026-03-20T18:26:36.825642+0000","last_active":"2026-03-20T18:26:36.825505+0000","last_peered":"2026-03-20T18:26:36.825505+0000","last_clean":"2026-03-20T18:26:36.825505+0000","last_became_active":"2026-03-20T18:26:34.812194+0000","last_became_peered":"2026-03-20T18:26:34.812194+0000","last_unstale":"2026-03-20T18:26:36.825505+0000","last_undegraded":"2026-03-20T18:26:36.825505+0000","last_fullsized":"2026-03-20T18:26:36.825505+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T05:18:49.487953+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00056843400000000004,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1],"acting":[7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"19'1","reported_seq":21,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.828066+0000","last_change":"2026-03-20T18:26:36.828170+0000","last_active":"2026-03-20T18:26:36.828066+0000","last_peered":"2026-03-20T18:26:36.828066+0000","last_clean":"2026-03-20T18:26:36.828066+0000","last_became_active":"2026-03-20T18:26:34.809407+0000","last_became_peered":"2026-03-20T18:26:34.809407+0000","last_unstale":"2026-03-20T18:26:36.828066+0000","last_undegraded":"2026-03-20T18:26:36.828066+0000","last_fullsized":"2026-03-20T18:26:36.828066+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T00:51:34.493085+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00021676599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"16'192","reported_seq":249,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825110+0000","last_change":"2026-03-20T18:26:31.835164+0000","last_active":"2026-03-20T18:26:36.825110+0000","last_peered":"2026-03-20T18:26:36.825110+0000","last_clean":"2026-03-20T18:26:36.825110+0000","last_became_active":"2026-03-20T18:26:31.834640+0000","last_became_peered":"2026-03-20T18:26:31.834640+0000","last_unstale":"2026-03-20T18:26:36.825110+0000","last_undegraded":"2026-03-20T18:26:36.825110+0000","last_fullsized":"2026-03-20T18:26:36.825110+0000","mapping_epoch":15,"log_start":"16'100","ondisk_log_start":"16'100","created":15,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:30.772441+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:30.772441+0000","last_clean_scrub_stamp":"2026-03-20T18:26:30.772441+0000","objects_scrubbed":0,"log_size":92,"log_dups_size":100,"ondisk_log_size":92,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T00:19:32.541225+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":233,"num_write_kb":4760,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":233,"num_write_kb":4760,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1187840,"data_stored":1180736,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":92,"ondisk_log_size":92,"up":2,"acti
ng":2,"num_store_stats":2}],"osd_stats":[{"osd":7,"up_from":13,"seq":55834574851,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27640,"kb_used_data":792,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344200,"statfs":{"total":96636764160,"available":96608460800,"internally_reserved":0,"allocated":811008,"data_stored":663659,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8772,"internal_metadata":27450812},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":13,"seq":55834574851,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":200,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":204800,"data_stored":67475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7476,"internal_metadata":27452108},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":13,"seq":55834574851,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":216,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":221184,"data_stored":73310,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":13,"seq":55834574851,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":94371840,"kb_used":27060,"kb_used_data":212,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344780,"statfs":{"total":96636764160,"available":96609054720,"internally_reserved":0,"allocated":217088,"data_stored":73291,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7477,"internal_metadata":27452107},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":3,"up_from":13,"seq":55834574851,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27068,"kb_used_data":212,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344772,"statfs":{"total":96636764160,"available":96609046528,"internally_reserved":0,"allocated":217088,"data_stored":73291,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6175,"internal_metadata":27453409},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_que
ue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":13,"seq":55834574851,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27040,"kb_used_data":200,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344800,"statfs":{"total":96636764160,"available":96609075200,"internally_reserved":0,"allocated":204800,"data_stored":67475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574851,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":216,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":221184,"data_stored":73310,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":13,"seq":55834574851,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27644,"kb_used_data":780,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344196,"statfs":{"total":96636764160,"available":96608456704,"internally_reserved":0,"allocated":798720,"data_stored":657843,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8127,"internal_metadata":27451457},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved
":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-20T18:26:40.135 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-20T18:26:40.343 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:40.344 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-20T18:26:40.356 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":20,"stamp":"2026-03-20T18:26:40.247509+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":590387,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":235,"num_write_kb":4762,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":95,"ondisk_log_size":95,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":18,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":7,"kb":754974720,"kb_used":217648,"kb_used_data":2864,"kb_used_omap":65,"kb_used_meta":214462,"kb_avail":754757072,"statfs":{"total":773094113280,"available":772871241728,"internally_reserved":0,"allocated":2932736,"data_stored":1767102,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":66958,"internal_metadata":219609714},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dir
ty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"5.441461"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:37.167943+0000","last_change":"2026-03-20T18:26:37.168015+0000","last_active":"2026-03-20T18:26:37.167943+0000","last_peered":"2026-03-20T18:26:37.167943+0000","last_clean":"2026-03-20T18:26:37.167943+0000","last_became_active":"2026-03-20T18:26:34.808912+0000","last_became_peered":"2026-03-20T18:26:34.808912+0000","last_unstale":"2026-03-20T18:26:37.167943+0000","last_undegraded":"2026-03-20T18:26:37.167943+0000","last_fullsized":"2026-03-20T18:26:37.167943+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T00:02:10.147344+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00015741400000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7],"acting":[6,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825216+0000","last_change":"2026-03-20T18:26:36.825351+0000","last_active":"2026-03-20T18:26:36.825216+0000","last_peered":"2026-03-20T18:26:36.825216+0000","last_clean":"2026-03-20T18:26:36.825216+0000","last_became_active":"2026-03-20T18:26:34.811439+0000","last_became_peered":"2026-03-20T18:26:34.811439+0000","last_unstale":"2026-03-20T18:26:36.825216+0000","last_undegraded":"2026-03-20T18:26:36.825216+0000","last_fullsized":"2026-03-20T18:26:36.825216+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T04:26:44.551572+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00098586599999999996,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6],"acting":[1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825358+0000","last_change":"2026-03-20T18:26:36.825482+0000","last_active":"2026-03-20T18:26:36.825358+0000","last_peered":"2026-03-20T18:26:36.825358+0000","last_clean":"2026-03-20T18:26:36.825358+0000","last_became_active":"2026-03-20T18:26:34.809668+0000","last_became_peered":"2026-03-20T18:26:34.809668+0000","last_unstale":"2026-03-20T18:26:36.825358+0000","last_undegraded":"2026-03-20T18:26:36.825358+0000","last_fullsized":"2026-03-20T18:26:36.825358+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T22:37:36.908011+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00049786199999999996,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825242+0000","last_change":"2026-03-20T18:26:36.825353+0000","last_active":"2026-03-20T18:26:36.825242+0000","last_peered":"2026-03-20T18:26:36.825242+0000","last_clean":"2026-03-20T18:26:36.825242+0000","last_became_active":"2026-03-20T18:26:34.812023+0000","last_became_peered":"2026-03-20T18:26:34.812023+0000","last_unstale":"2026-03-20T18:26:36.825242+0000","last_undegraded":"2026-03-20T18:26:36.825242+0000","last_fullsized":"2026-03-20T18:26:36.825242+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T05:16:14.283906+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00085044099999999996,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"21'2","reported_seq":22,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.828642+0000","last_change":"2026-03-20T18:26:36.828642+0000","last_active":"2026-03-20T18:26:36.828642+0000","last_peered":"2026-03-20T18:26:36.828642+0000","last_clean":"2026-03-20T18:26:36.828642+0000","last_became_active":"2026-03-20T18:26:34.811618+0000","last_became_peered":"2026-03-20T18:26:34.811618+0000","last_unstale":"2026-03-20T18:26:36.828642+0000","last_undegraded":"2026-03-20T18:26:36.828642+0000","last_fullsized":"2026-03-20T18:26:36.828642+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T22:01:30.487595+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00065448500000000001,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:37.073041+0000","last_change":"2026-03-20T18:26:37.073127+0000","last_active":"2026-03-20T18:26:37.073041+0000","last_peered":"2026-03-20T18:26:37.073041+0000","last_clean":"2026-03-20T18:26:37.073041+0000","last_became_active":"2026-03-20T18:26:35.142116+0000","last_became_peered":"2026-03-20T18:26:35.142116+0000","last_unstale":"2026-03-20T18:26:37.073041+0000","last_undegraded":"2026-03-20T18:26:37.073041+0000","last_fullsized":"2026-03-20T18:26:37.073041+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T23:41:44.440453+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00018643899999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3],"acting":[2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":20,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825505+0000","last_change":"2026-03-20T18:26:36.825642+0000","last_active":"2026-03-20T18:26:36.825505+0000","last_peered":"2026-03-20T18:26:36.825505+0000","last_clean":"2026-03-20T18:26:36.825505+0000","last_became_active":"2026-03-20T18:26:34.812194+0000","last_became_peered":"2026-03-20T18:26:34.812194+0000","last_unstale":"2026-03-20T18:26:36.825505+0000","last_undegraded":"2026-03-20T18:26:36.825505+0000","last_fullsized":"2026-03-20T18:26:36.825505+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T05:18:49.487953+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00056843400000000004,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1],"acting":[7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"19'1","reported_seq":21,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.828066+0000","last_change":"2026-03-20T18:26:36.828170+0000","last_active":"2026-03-20T18:26:36.828066+0000","last_peered":"2026-03-20T18:26:36.828066+0000","last_clean":"2026-03-20T18:26:36.828066+0000","last_became_active":"2026-03-20T18:26:34.809407+0000","last_became_peered":"2026-03-20T18:26:34.809407+0000","last_unstale":"2026-03-20T18:26:36.828066+0000","last_undegraded":"2026-03-20T18:26:36.828066+0000","last_fullsized":"2026-03-20T18:26:36.828066+0000","mapping_epoch":18,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":19,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:33.790794+0000","last_clean_scrub_stamp":"2026-03-20T18:26:33.790794+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T00:51:34.493085+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00021676599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"16'192","reported_seq":249,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-20T18:26:36.825110+0000","last_change":"2026-03-20T18:26:31.835164+0000","last_active":"2026-03-20T18:26:36.825110+0000","last_peered":"2026-03-20T18:26:36.825110+0000","last_clean":"2026-03-20T18:26:36.825110+0000","last_became_active":"2026-03-20T18:26:31.834640+0000","last_became_peered":"2026-03-20T18:26:31.834640+0000","last_unstale":"2026-03-20T18:26:36.825110+0000","last_undegraded":"2026-03-20T18:26:36.825110+0000","last_fullsized":"2026-03-20T18:26:36.825110+0000","mapping_epoch":15,"log_start":"16'100","ondisk_log_start":"16'100","created":15,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T18:26:30.772441+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T18:26:30.772441+0000","last_clean_scrub_stamp":"2026-03-20T18:26:30.772441+0000","objects_scrubbed":0,"log_size":92,"log_dups_size":100,"ondisk_log_size":92,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T00:19:32.541225+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":233,"num_write_kb":4760,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0],"acting":[7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":126,"num_read_kb":109,"num_write":233,"num_write_kb":4760,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1187840,"data_stored":1180736,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":92,"ondisk_log_size":92,"up":2,"acti
ng":2,"num_store_stats":2}],"osd_stats":[{"osd":7,"up_from":13,"seq":55834574852,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27640,"kb_used_data":792,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344200,"statfs":{"total":96636764160,"available":96608460800,"internally_reserved":0,"allocated":811008,"data_stored":663659,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8772,"internal_metadata":27450812},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":13,"seq":55834574852,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27060,"kb_used_data":212,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344780,"statfs":{"total":96636764160,"available":96609054720,"internally_reserved":0,"allocated":217088,"data_stored":73291,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7476,"internal_metadata":27452108},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":13,"seq":55834574852,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":216,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":221184,"data_stored":73310,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":13,"seq":55834574852,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":94371840,"kb_used":27060,"kb_used_data":212,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344780,"statfs":{"total":96636764160,"available":96609054720,"internally_reserved":0,"allocated":217088,"data_stored":73291,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7477,"internal_metadata":27452107},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":13,"seq":55834574852,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27068,"kb_used_data":212,"kb_used_omap":6,"kb_used_meta":26809,"kb_avail":94344772,"statfs":{"total":96636764160,"available":96609046528,"internally_reserved":0,"allocated":217088,"data_stored":73291,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":6175,"internal_metadata":27453409},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist"
:{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":13,"seq":55834574852,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27060,"kb_used_data":212,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344780,"statfs":{"total":96636764160,"available":96609054720,"internally_reserved":0,"allocated":217088,"data_stored":73291,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574852,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27064,"kb_used_data":216,"kb_used_omap":9,"kb_used_meta":26806,"kb_avail":94344776,"statfs":{"total":96636764160,"available":96609050624,"internally_reserved":0,"allocated":221184,"data_stored":73310,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":9427,"internal_metadata":27450157},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":13,"seq":55834574852,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27632,"kb_used_data":792,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344208,"statfs":{"total":96636764160,"available":96608468992,"internally_reserved":0,"allocated":811008,"data_stored":663659,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocat
ed":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-20T18:26:40.357 INFO:tasks.ceph.ceph_manager.ceph:clean! 2026-03-20T18:26:40.357 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-20T18:26:40.357 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy 2026-03-20T18:26:40.357 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph health --format=json 2026-03-20T18:26:40.607 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:26:40.607 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-20T18:26:40.622 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy done 2026-03-20T18:26:40.622 INFO:teuthology.run_tasks:Running task openssl_keys... 2026-03-20T18:26:40.625 INFO:teuthology.run_tasks:Running task rgw... 2026-03-20T18:26:40.629 DEBUG:tasks.rgw:config is {'client.0': None, 'client.1': None, 'client.2': None} 2026-03-20T18:26:40.629 DEBUG:tasks.rgw:client list is dict_keys(['client.0', 'client.1', 'client.2']) 2026-03-20T18:26:40.629 INFO:tasks.rgw:Creating data pools 2026-03-20T18:26:40.629 DEBUG:tasks.rgw:Obtaining remote for client client.0 2026-03-20T18:26:40.629 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph 2026-03-20T18:26:40.891 INFO:teuthology.orchestra.run.vm00.stderr:pool 'default.rgw.buckets.data' created 2026-03-20T18:26:40.912 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph 2026-03-20T18:26:41.894 INFO:teuthology.orchestra.run.vm00.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data' 2026-03-20T18:26:41.928 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph 2026-03-20T18:26:42.909 INFO:teuthology.orchestra.run.vm00.stderr:pool 'default.rgw.buckets.index' created 2026-03-20T18:26:42.926 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph 2026-03-20T18:26:43.512 INFO:teuthology.orchestra.run.vm00.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index' 2026-03-20T18:26:43.525 DEBUG:tasks.rgw:Obtaining remote for client client.1 2026-03-20T18:26:43.525 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph 2026-03-20T18:26:43.731 INFO:teuthology.orchestra.run.vm02.stderr:pool 'default.rgw.buckets.data' already exists 2026-03-20T18:26:43.746 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph 2026-03-20T18:26:44.491 INFO:teuthology.orchestra.run.vm02.stderr:enabled 
application 'rgw' on pool 'default.rgw.buckets.data' 2026-03-20T18:26:44.503 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph 2026-03-20T18:26:44.705 INFO:teuthology.orchestra.run.vm02.stderr:pool 'default.rgw.buckets.index' already exists 2026-03-20T18:26:44.718 DEBUG:teuthology.orchestra.run.vm02:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph 2026-03-20T18:26:45.923 INFO:teuthology.orchestra.run.vm02.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index' 2026-03-20T18:26:45.935 DEBUG:tasks.rgw:Obtaining remote for client client.2 2026-03-20T18:26:45.935 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph 2026-03-20T18:26:46.152 INFO:teuthology.orchestra.run.vm05.stderr:pool 'default.rgw.buckets.data' already exists 2026-03-20T18:26:46.165 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph 2026-03-20T18:26:46.927 INFO:teuthology.orchestra.run.vm05.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data' 2026-03-20T18:26:46.941 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph 2026-03-20T18:26:47.138 INFO:teuthology.orchestra.run.vm05.stderr:pool 'default.rgw.buckets.index' already exists 2026-03-20T18:26:47.151 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph 2026-03-20T18:26:47.936 INFO:teuthology.orchestra.run.vm05.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index' 2026-03-20T18:26:47.950 DEBUG:tasks.rgw:Pools created 2026-03-20T18:26:47.950 INFO:tasks.util.rgw:rgwadmin: client.0 : ['user', 'list'] 2026-03-20T18:26:47.950 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'user', 'list'] 2026-03-20T18:26:47.950 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph user list 2026-03-20T18:26:47.989 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:47.989 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:50.008 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.006+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.009 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.008+0000 7f7706d84900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:50.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.008+0000 7f7706d84900 20 realm 2026-03-20T18:26:50.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.008+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.008+0000 7f7706d84900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:50.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.008+0000 7f7706d84900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory 2026-03-20T18:26:50.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.008+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.010 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.008+0000 7f7706d84900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:50.010 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.008+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.009+0000 7f7706d84900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T18:26:50.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.009+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.010+0000 7f7706d84900 20 rados_obj.operate() r=0 bl.length=1060 2026-03-20T18:26:50.011 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.010+0000 7f7706d84900 20 searching for the correct realm 2026-03-20T18:26:50.022 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 RGWRados::pool_iterate: got default.zonegroup. 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 RGWRados::pool_iterate: got zonegroup_info.1f41abd1-1863-43a5-a3b4-57a42098bd37 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 RGWRados::pool_iterate: got zone_info.06c127a6-ead2-4613-b3e9-3a45595f4b52 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 RGWRados::pool_iterate: got default.zone. 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 RGWRados::pool_iterate: got zone_names.default 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.021+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 20 rados_obj.operate() r=0 bl.length=436 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 20 zone default found 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 4 Realm: () 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 4 ZoneGroup: default (1f41abd1-1863-43a5-a3b4-57a42098bd37) 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 4 Zone: default (06c127a6-ead2-4613-b3e9-3a45595f4b52) 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 10 cannot find current period zonegroup using local zonegroup configuration 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 
7f7706d84900 20 zonegroup default 2026-03-20T18:26:50.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:50.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:50.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:50.022+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:51.976 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:51.974+0000 7f7706d84900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:51.976 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:51.974+0000 7f7706d84900 20 rados->read ofs=0 len=0 2026-03-20T18:26:51.976 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:51.975+0000 7f7706d84900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:51.976 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:51.975+0000 7f7706d84900 20 started sync module instance, tier type = 2026-03-20T18:26:51.976 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:51.975+0000 7f7706d84900 20 started zone id=06c127a6-ead2-4613-b3e9-3a45595f4b52 (name=default) with tier type = 2026-03-20T18:26:54.528 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.526+0000 7f7706d84900 20 add_watcher() i=3 2026-03-20T18:26:54.528 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.526+0000 7f7706d84900 20 add_watcher() i=0 2026-03-20T18:26:54.535 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.533+0000 7f7706d84900 20 add_watcher() i=7 2026-03-20T18:26:54.557 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.555+0000 7f7706d84900 20 add_watcher() i=4 2026-03-20T18:26:54.557 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.555+0000 7f7706d84900 20 add_watcher() i=2 2026-03-20T18:26:54.557 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.555+0000 7f7706d84900 20 add_watcher() i=1 2026-03-20T18:26:54.558 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.556+0000 7f7706d84900 20 add_watcher() i=5 2026-03-20T18:26:54.558 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.556+0000 7f7706d84900 20 add_watcher() i=6 2026-03-20T18:26:54.558 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.556+0000 7f7706d84900 2 all 8 watchers are set, enabling cache 2026-03-20T18:26:54.559 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.558+0000 7f76ebfff640 5 boost::asio::awaitable, obj_version> > logback_generations::read(const DoutPrefixProvider*):446: oid=data_loggenerations_metadata not found 2026-03-20T18:26:54.559 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.558+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.0 2026-03-20T18:26:54.559 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.558+0000 7f76ebfff640 20 do_open: entering 2026-03-20T18:26:54.560 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.559+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.0 does not exist 2026-03-20T18:26:54.560 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.559+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const 
DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.1
[... ~190 near-identical DEBUG records elided: the same probe_shard "probing obj=data_log.N" / "do_open: entering" / "obj=data_log.N does not exist" triplet repeats for N = 1 through 64, all between 2026-03-20T18:26:54.560 and 2026-03-20T18:26:54.580 ...]
2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.578+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing 
obj=data_log.65 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.578+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.578+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.65 does not exist 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.578+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.66 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.578+0000 7f76eaffd640 20 do_open: entering 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.66 does not exist 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.67 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7703ae6640 20 do_open: entering 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.67 does not exist 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.68 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7701ae2640 20 do_open: entering 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.68 does not exist 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.69 2026-03-20T18:26:54.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f77012e1640 20 do_open: entering 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.69 does not exist 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const 
neorados::IOContext&, bool&):59 probing obj=data_log.70 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.579+0000 7f7700ae0640 20 do_open: entering 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.70 does not exist 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.71 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f7704d6f640 20 do_open: entering 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.71 does not exist 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.72 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76ebfff640 20 do_open: entering 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.72 does not exist 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.73 2026-03-20T18:26:54.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.73 does not exist 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.74 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f76eaffd640 20 do_open: entering 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.74 does not exist 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, 
const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.75 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.580+0000 7f7703ae6640 20 do_open: entering 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.75 does not exist 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.76 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7701ae2640 20 do_open: entering 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.76 does not exist 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.77 2026-03-20T18:26:54.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f77012e1640 20 do_open: entering 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.77 does not exist 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.78 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7700ae0640 20 do_open: entering 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.78 does not exist 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.79 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.581+0000 7f7704d6f640 20 do_open: entering 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.79 does not exist 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const 
DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.80 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76ebfff640 20 do_open: entering 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.80 does not exist 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.81 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.81 does not exist 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.82 2026-03-20T18:26:54.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f76eaffd640 20 do_open: entering 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.82 does not exist 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.83 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.582+0000 7f7703ae6640 20 do_open: entering 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.83 does not exist 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.84 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7701ae2640 20 do_open: entering 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.84 does not exist 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f77012e1640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.85 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f77012e1640 20 do_open: entering 2026-03-20T18:26:54.584 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.85 does not exist 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.86 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7700ae0640 20 do_open: entering 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.86 does not exist 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.87 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.583+0000 7f7704d6f640 20 do_open: entering 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.87 does not exist 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.88 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 7f76ebfff640 20 do_open: entering 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.88 does not exist 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.89 2026-03-20T18:26:54.585 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.89 does not exist 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 
7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.90 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.584+0000 7f76eaffd640 20 do_open: entering 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.90 does not exist 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.91 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7703ae6640 20 do_open: entering 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.91 does not exist 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.92 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7701ae2640 20 do_open: entering 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.92 does not exist 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.93 2026-03-20T18:26:54.586 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f77012e1640 20 do_open: entering 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.93 does not exist 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.94 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7700ae0640 20 do_open: entering 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.94 does not exist 2026-03-20T18:26:54.587 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.95 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.585+0000 7f7704d6f640 20 do_open: entering 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.95 does not exist 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.96 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76ebfff640 20 do_open: entering 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.96 does not exist 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.97 2026-03-20T18:26:54.587 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.97 does not exist 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.98 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.586+0000 7f76eaffd640 20 do_open: entering 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.98 does not exist 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.99 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7703ae6640 20 do_open: entering 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.99 does not 
exist 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.100 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7701ae2640 20 do_open: entering 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.100 does not exist 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.101 2026-03-20T18:26:54.588 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f77012e1640 20 do_open: entering 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.101 does not exist 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.102 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.587+0000 7f7700ae0640 20 do_open: entering 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.102 does not exist 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.103 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f7704d6f640 20 do_open: entering 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.103 does not exist 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.104 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76ebfff640 20 do_open: entering 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const 
neorados::IOContext&, bool&):78: obj=data_log.104 does not exist 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.105 2026-03-20T18:26:54.589 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.105 does not exist 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.106 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f76eaffd640 20 do_open: entering 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.106 does not exist 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.107 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.588+0000 7f7703ae6640 20 do_open: entering 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.107 does not exist 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.108 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7701ae2640 20 do_open: entering 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.108 does not exist 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.109 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f77012e1640 20 do_open: entering 2026-03-20T18:26:54.590 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, 
neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.109 does not exist 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.110 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7700ae0640 20 do_open: entering 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.110 does not exist 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.111 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.589+0000 7f7704d6f640 20 do_open: entering 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.111 does not exist 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.112 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76ebfff640 20 do_open: entering 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.112 does not exist 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.113 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.113 does not exist 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.114 2026-03-20T18:26:54.591 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f76eaffd640 20 do_open: entering 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f7703ae6640 20 boost::asio::awaitable 
{anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.114 does not exist 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.115 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.590+0000 7f7703ae6640 20 do_open: entering 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.115 does not exist 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.116 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7701ae2640 20 do_open: entering 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.116 does not exist 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.117 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f77012e1640 20 do_open: entering 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.117 does not exist 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.118 2026-03-20T18:26:54.592 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7700ae0640 20 do_open: entering 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.118 does not exist 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.119 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f7704d6f640 20 do_open: entering 2026-03-20T18:26:54.593 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.119 does not exist 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.120 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.591+0000 7f76ebfff640 20 do_open: entering 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.120 does not exist 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f76eb7fe640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.121 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.121 does not exist 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f76eaffd640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.122 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f76eaffd640 20 do_open: entering 2026-03-20T18:26:54.593 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.122 does not exist 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f7703ae6640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.123 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f7703ae6640 20 do_open: entering 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.123 does not exist 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f7701ae2640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.124 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f7701ae2640 20 
do_open: entering 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.124 does not exist 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f77012e1640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.125 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.592+0000 7f77012e1640 20 do_open: entering 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.593+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.125 does not exist 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.593+0000 7f7700ae0640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.126 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.593+0000 7f7700ae0640 20 do_open: entering 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.593+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.126 does not exist 2026-03-20T18:26:54.594 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.593+0000 7f7704d6f640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):59 probing obj=data_log.127 2026-03-20T18:26:54.595 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.593+0000 7f7704d6f640 20 do_open: entering 2026-03-20T18:26:54.595 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.593+0000 7f76ebfff640 20 boost::asio::awaitable {anonymous}::probe_shard(const DoutPrefixProvider*, neorados::RADOS, const neorados::Object&, const neorados::IOContext&, bool&):78: obj=data_log.127 does not exist 2026-03-20T18:26:54.595 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.593+0000 7f76ebfff640 20 do_create: entering 2026-03-20T18:26:54.596 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.595+0000 7f76eb7fe640 20 do_open: entering 2026-03-20T18:26:54.600 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.598+0000 7f7706d84900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:54.600 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:54.598+0000 7f7706d84900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:57.308 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.306+0000 7f7706d84900 10 rgw_init_ioctx warning: failed to set recovery_priority on default.rgw.meta 2026-03-20T18:26:57.308 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.306+0000 7f7706d84900 5 note: GC not initialized 2026-03-20T18:26:57.308 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.306+0000 7f76ad7e2640 20 reqs_thread_entry: start 
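(Editor's note: the probe storm above is the RGW datalog starting up against a fresh cluster: it stats each of the 128 data_log.N shard objects in the log pool, finds that none exist, and then creates what is missing, which is the do_create that follows data_log.127. Below is a minimal sketch of the same probe-then-create pattern using the Python rados bindings rather than the neorados C++ coroutines that actually emitted these lines; pool and object names are taken from this run, while the helper itself and its create step are illustrative assumptions.)

```python
# Minimal sketch, assuming the pool/object names seen in this log
# (log_pool=default.rgw.log, shards data_log.0..data_log.127); this is
# NOT the neorados implementation that produced the entries above.
import rados

NUM_DATALOG_SHARDS = 128  # data_log.0 .. data_log.127, as probed above

def probe_datalog_shards(conffile="/etc/ceph/ceph.conf",
                         pool="default.rgw.log", create_missing=False):
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            for shard in range(NUM_DATALOG_SHARDS):
                obj = f"data_log.{shard}"
                try:
                    ioctx.stat(obj)                 # "probing obj=..."
                    print(f"{obj} exists")
                except rados.ObjectNotFound:
                    print(f"{obj} does not exist")  # every probe here hit this
                    if create_missing:
                        # rough analogue of the do_create seen after data_log.127
                        ioctx.write_full(obj, b"")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```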
2026-03-20T18:26:57.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.412+0000 7f7706d84900 20 init_complete bucket index max shards: 11
2026-03-20T18:26:57.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.412+0000 7f7706d84900 20 Filter name: none
2026-03-20T18:26:57.414 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.412+0000 7f7666ffd640 20 reqs_thread_entry: start
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.425+0000 7f7706d84900 20 remove_watcher() i=0
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.425+0000 7f7706d84900 2 removed watcher, disabling cache
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.425+0000 7f7706d84900 20 remove_watcher() i=2
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.425+0000 7f7706d84900 20 remove_watcher() i=1
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.425+0000 7f7706d84900 20 remove_watcher() i=5
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.425+0000 7f7706d84900 20 remove_watcher() i=6
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.426+0000 7f7706d84900 20 remove_watcher() i=3
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.426+0000 7f7706d84900 20 remove_watcher() i=7
2026-03-20T18:26:57.427 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.426+0000 7f7706d84900 20 remove_watcher() i=4
2026-03-20T18:26:57.434 INFO:teuthology.orchestra.run.vm00.stdout:[]
2026-03-20T18:26:57.434 DEBUG:tasks.util.rgw: json result: []
2026-03-20T18:26:57.434 INFO:tasks.rgw:Configuring storage class = FROZEN
2026-03-20T18:26:57.434 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN']
2026-03-20T18:26:57.434 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN']
2026-03-20T18:26:57.434 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN
2026-03-20T18:26:57.520 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T18:26:57.520 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T18:26:57.535 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.533+0000 7f73ac013900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-03-20T18:26:57.535 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.533+0000 7f73ac013900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-03-20T18:26:57.535 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.533+0000 7f73557e2640 20 reqs_thread_entry: start
2026-03-20T18:26:57.545 INFO:teuthology.orchestra.run.vm00.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","STANDARD"]}}]
2026-03-20T18:26:57.545 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'STANDARD']}}]
2026-03-20T18:26:57.545 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen']
2026-03-20T18:26:57.545 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen']
2026-03-20T18:26:57.545 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen
2026-03-20T18:26:57.586 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T18:26:57.586 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T18:26:57.601 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.599+0000 7fc73a51f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-03-20T18:26:57.601 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.599+0000 7fc73a51f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-03-20T18:26:57.602 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.600+0000 7fc6e4fe1640 20 reqs_thread_entry: start
2026-03-20T18:26:57.611 INFO:teuthology.orchestra.run.vm00.stdout:{"id":"06c127a6-ead2-4613-b3e9-3a45595f4b52","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"}
2026-03-20T18:26:57.611 DEBUG:tasks.util.rgw: json result: {'id': '06c127a6-ead2-4613-b3e9-3a45595f4b52', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'}
2026-03-20T18:26:57.611 INFO:tasks.rgw:Configuring storage class = LUKEWARM
2026-03-20T18:26:57.611 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM']
2026-03-20T18:26:57.611 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM']
2026-03-20T18:26:57.611 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM
2026-03-20T18:26:57.691 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T18:26:57.691 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T18:26:57.707 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.705+0000 7f2992b1f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-03-20T18:26:57.707 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.705+0000 7f2992b1f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-03-20T18:26:57.707 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.705+0000 7f293cfe1640 20 reqs_thread_entry: start
2026-03-20T18:26:57.717 INFO:teuthology.orchestra.run.vm00.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}]
2026-03-20T18:26:57.717 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}]
2026-03-20T18:26:57.717 INFO:tasks.util.rgw:rgwadmin: client.0 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm']
2026-03-20T18:26:57.717 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm']
2026-03-20T18:26:57.717 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm
2026-03-20T18:26:57.799 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T18:26:57.799 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T18:26:57.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.814+0000 7f92fdd28900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2026-03-20T18:26:57.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.814+0000 7f92fdd28900 20 rgw_check_secure_mon_conn(): mode 1 is insecure
2026-03-20T18:26:57.815 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:26:57.814+0000 7f92a37fe640 20 reqs_thread_entry: start
2026-03-20T18:26:57.826 INFO:teuthology.orchestra.run.vm00.stdout:{"id":"06c127a6-ead2-4613-b3e9-3a45595f4b52","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"}
2026-03-20T18:26:57.826 DEBUG:tasks.util.rgw: json result: {'id': '06c127a6-ead2-4613-b3e9-3a45595f4b52', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'}
2026-03-20T18:26:57.826 INFO:tasks.util.rgw:rgwadmin: client.1 : ['user', 'list']
2026-03-20T18:26:57.826 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'user', 'list']
2026-03-20T18:26:57.826 DEBUG:teuthology.orchestra.run.vm02:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph user list
2026-03-20T18:26:57.863 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setuser ceph since I am not root
2026-03-20T18:26:57.864 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T18:26:57.882 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.881+0000 7f93e0d93900 20 rados->read ofs=0 len=0
2026-03-20T18:26:57.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.883+0000 7f93e0d93900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T18:26:57.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.883+0000 7f93e0d93900 20 realm
2026-03-20T18:26:57.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.883+0000 7f93e0d93900 20 rados->read ofs=0 len=0
2026-03-20T18:26:57.883 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.883+0000 7f93e0d93900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T18:26:57.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.883+0000 7f93e0d93900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory
2026-03-20T18:26:57.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.883+0000 7f93e0d93900 20 rados->read ofs=0 len=0
2026-03-20T18:26:57.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.883+0000 7f93e0d93900 20 rados_obj.operate() r=-2 bl.length=0
2026-03-20T18:26:57.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.883+0000 7f93e0d93900 20 rados->read ofs=0 len=0
2026-03-20T18:26:57.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.884+0000 7f93e0d93900 20 rados_obj.operate() r=0 bl.length=46
2026-03-20T18:26:57.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.884+0000 7f93e0d93900 20 rados->read ofs=0 len=0
2026-03-20T18:26:57.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.884+0000 7f93e0d93900 20 rados_obj.operate() r=0 bl.length=1190
2026-03-20T18:26:57.884 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.884+0000 7f93e0d93900 20 searching for the correct realm
2026-03-20T18:26:57.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 RGWRados::pool_iterate: got default.zonegroup.
2026-03-20T18:26:57.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 RGWRados::pool_iterate: got zonegroup_info.1f41abd1-1863-43a5-a3b4-57a42098bd37 2026-03-20T18:26:57.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 RGWRados::pool_iterate: got zone_info.06c127a6-ead2-4613-b3e9-3a45595f4b52 2026-03-20T18:26:57.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 RGWRados::pool_iterate: got default.zone. 2026-03-20T18:26:57.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 RGWRados::pool_iterate: got zone_names.default 2026-03-20T18:26:57.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-03-20T18:26:57.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 rados->read ofs=0 len=0 2026-03-20T18:26:57.895 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 rados->read ofs=0 len=0 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 rados->read ofs=0 len=0 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 rados_obj.operate() r=0 bl.length=470 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 zone default found 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 4 Realm: () 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 4 ZoneGroup: default (1f41abd1-1863-43a5-a3b4-57a42098bd37) 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 4 Zone: default (06c127a6-ead2-4613-b3e9-3a45595f4b52) 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 10 cannot find current period zonegroup using local zonegroup configuration 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 zonegroup default 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.895+0000 7f93e0d93900 20 rados->read ofs=0 len=0 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.896+0000 7f93e0d93900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:57.896 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.896+0000 7f93e0d93900 20 rados->read ofs=0 len=0 2026-03-20T18:26:57.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.896+0000 7f93e0d93900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:57.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.896+0000 7f93e0d93900 20 rados->read ofs=0 len=0 2026-03-20T18:26:57.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.896+0000 7f93e0d93900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:57.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.896+0000 7f93e0d93900 
20 started sync module instance, tier type = 2026-03-20T18:26:57.897 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.896+0000 7f93e0d93900 20 started zone id=06c127a6-ead2-4613-b3e9-3a45595f4b52 (name=default) with tier type = 2026-03-20T18:26:57.900 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.900+0000 7f93e0d93900 20 add_watcher() i=0 2026-03-20T18:26:57.901 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.901+0000 7f93e0d93900 20 add_watcher() i=4 2026-03-20T18:26:57.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.902+0000 7f93e0d93900 20 add_watcher() i=1 2026-03-20T18:26:57.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.902+0000 7f93e0d93900 20 add_watcher() i=2 2026-03-20T18:26:57.902 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.902+0000 7f93e0d93900 20 add_watcher() i=7 2026-03-20T18:26:57.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.902+0000 7f93e0d93900 20 add_watcher() i=3 2026-03-20T18:26:57.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.903+0000 7f93e0d93900 20 add_watcher() i=6 2026-03-20T18:26:57.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.903+0000 7f93e0d93900 20 add_watcher() i=5 2026-03-20T18:26:57.903 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.903+0000 7f93e0d93900 2 all 8 watchers are set, enabling cache 2026-03-20T18:26:57.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.905+0000 7f93e0d93900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:57.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.905+0000 7f93e0d93900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:57.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.905+0000 7f93e0d93900 5 note: GC not initialized 2026-03-20T18:26:57.906 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.906+0000 7f938a7e4640 20 reqs_thread_entry: start 2026-03-20T18:26:57.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.951+0000 7f93e0d93900 20 init_complete bucket index max shards: 11 2026-03-20T18:26:57.951 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.951+0000 7f93e0d93900 20 Filter name: none 2026-03-20T18:26:57.952 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.951+0000 7f933bfff640 20 reqs_thread_entry: start 2026-03-20T18:26:57.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.961+0000 7f93e0d93900 20 remove_watcher() i=2 2026-03-20T18:26:57.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.961+0000 7f93e0d93900 2 removed watcher, disabling cache 2026-03-20T18:26:57.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.961+0000 7f93e0d93900 20 remove_watcher() i=1 2026-03-20T18:26:57.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.961+0000 7f93e0d93900 20 remove_watcher() i=0 2026-03-20T18:26:57.962 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.962+0000 7f93e0d93900 20 remove_watcher() i=3 2026-03-20T18:26:57.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.962+0000 7f93e0d93900 20 remove_watcher() i=4 2026-03-20T18:26:57.963 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.963+0000 7f93e0d93900 20 remove_watcher() i=7 2026-03-20T18:26:57.964 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.963+0000 7f93e0d93900 20 remove_watcher() i=5 2026-03-20T18:26:57.964 
INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:57.964+0000 7f93e0d93900 20 remove_watcher() i=6 2026-03-20T18:26:57.971 INFO:teuthology.orchestra.run.vm02.stdout:[] 2026-03-20T18:26:57.971 DEBUG:tasks.util.rgw: json result: [] 2026-03-20T18:26:57.971 INFO:tasks.rgw:Configuring storage class = FROZEN 2026-03-20T18:26:57.971 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T18:26:57.971 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T18:26:57.972 DEBUG:teuthology.orchestra.run.vm02:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN 2026-03-20T18:26:58.054 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:58.054 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:58.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.068+0000 7f954c730900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:58.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.068+0000 7f954c730900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:58.069 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.068+0000 7f94f67e4640 20 reqs_thread_entry: start 2026-03-20T18:26:58.079 INFO:teuthology.orchestra.run.vm02.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T18:26:58.079 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T18:26:58.079 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T18:26:58.079 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T18:26:58.079 DEBUG:teuthology.orchestra.run.vm02:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen 2026-03-20T18:26:58.157 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:58.158 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:58.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.170+0000 
7fe19913d900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:58.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.170+0000 7fe19913d900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:58.171 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.170+0000 7fe142fe5640 20 reqs_thread_entry: start 2026-03-20T18:26:58.180 INFO:teuthology.orchestra.run.vm02.stdout:{"id":"06c127a6-ead2-4613-b3e9-3a45595f4b52","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T18:26:58.180 DEBUG:tasks.util.rgw: json result: {'id': '06c127a6-ead2-4613-b3e9-3a45595f4b52', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T18:26:58.180 INFO:tasks.rgw:Configuring storage class = LUKEWARM 2026-03-20T18:26:58.180 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T18:26:58.180 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', 
'--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T18:26:58.181 DEBUG:teuthology.orchestra.run.vm02:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM 2026-03-20T18:26:58.266 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:58.266 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:58.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.279+0000 7fb6f4f36900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:58.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.279+0000 7fb6f4f36900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:58.280 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.280+0000 7fb69efe5640 20 reqs_thread_entry: start 2026-03-20T18:26:58.290 INFO:teuthology.orchestra.run.vm02.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T18:26:58.290 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T18:26:58.290 INFO:tasks.util.rgw:rgwadmin: client.1 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T18:26:58.290 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.1', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T18:26:58.290 DEBUG:teuthology.orchestra.run.vm02:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.1 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm 2026-03-20T18:26:58.333 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:58.333 INFO:teuthology.orchestra.run.vm02.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:58.347 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.346+0000 7f245bc22900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:58.347 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.346+0000 7f245bc22900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:58.347 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-20T18:26:58.347+0000 7f24057e2640 20 reqs_thread_entry: start 2026-03-20T18:26:58.356 
INFO:teuthology.orchestra.run.vm02.stdout:{"id":"06c127a6-ead2-4613-b3e9-3a45595f4b52","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T18:26:58.357 DEBUG:tasks.util.rgw: json result: {'id': '06c127a6-ead2-4613-b3e9-3a45595f4b52', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T18:26:58.357 INFO:tasks.util.rgw:rgwadmin: client.2 : ['user', 'list'] 2026-03-20T18:26:58.357 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'user', 'list'] 2026-03-20T18:26:58.357 DEBUG:teuthology.orchestra.run.vm05:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph user list 2026-03-20T18:26:58.394 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:58.394 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:58.424 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.422+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.424+0000 7ff0d651f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:58.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.424+0000 7ff0d651f900 20 realm 2026-03-20T18:26:58.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.424+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.424+0000 7ff0d651f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:58.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.424+0000 7ff0d651f900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory 2026-03-20T18:26:58.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.424+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.424+0000 7ff0d651f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:58.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.424+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.425+0000 7ff0d651f900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T18:26:58.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.425+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.426+0000 7ff0d651f900 20 rados_obj.operate() r=0 bl.length=1190 2026-03-20T18:26:58.427 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.426+0000 7ff0d651f900 20 searching for the correct realm 2026-03-20T18:26:58.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.437+0000 7ff0d651f900 20 RGWRados::pool_iterate: got default.zonegroup. 2026-03-20T18:26:58.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.437+0000 7ff0d651f900 20 RGWRados::pool_iterate: got zonegroup_info.1f41abd1-1863-43a5-a3b4-57a42098bd37 2026-03-20T18:26:58.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.437+0000 7ff0d651f900 20 RGWRados::pool_iterate: got zone_info.06c127a6-ead2-4613-b3e9-3a45595f4b52 2026-03-20T18:26:58.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.437+0000 7ff0d651f900 20 RGWRados::pool_iterate: got default.zone. 
2026-03-20T18:26:58.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.437+0000 7ff0d651f900 20 RGWRados::pool_iterate: got zone_names.default 2026-03-20T18:26:58.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-03-20T18:26:58.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 rados_obj.operate() r=0 bl.length=470 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 zone default found 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 4 Realm: () 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 4 ZoneGroup: default (1f41abd1-1863-43a5-a3b4-57a42098bd37) 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 4 Zone: default (06c127a6-ead2-4613-b3e9-3a45595f4b52) 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 10 cannot find current period zonegroup using local zonegroup configuration 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 zonegroup default 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.438+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.439+0000 7ff0d651f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:58.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.439+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.439+0000 7ff0d651f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:58.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.439+0000 7ff0d651f900 20 rados->read ofs=0 len=0 2026-03-20T18:26:58.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.439+0000 7ff0d651f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:26:58.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.439+0000 7ff0d651f900 20 started sync module instance, tier type = 2026-03-20T18:26:58.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.439+0000 7ff0d651f900 20 started zone id=06c127a6-ead2-4613-b3e9-3a45595f4b52 (name=default) with tier type = 2026-03-20T18:26:58.445 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.443+0000 7ff0d651f900 20 add_watcher() i=0 2026-03-20T18:26:58.445 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.443+0000 7ff0d651f900 20 add_watcher() i=5 2026-03-20T18:26:58.445 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.444+0000 7ff0d651f900 20 add_watcher() i=6 2026-03-20T18:26:58.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.444+0000 7ff0d651f900 20 add_watcher() i=3 2026-03-20T18:26:58.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.444+0000 7ff0d651f900 20 add_watcher() i=1 2026-03-20T18:26:58.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.444+0000 7ff0d651f900 20 add_watcher() i=7 2026-03-20T18:26:58.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.444+0000 7ff0d651f900 20 add_watcher() i=2 2026-03-20T18:26:58.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.444+0000 7ff0d651f900 20 add_watcher() i=4 2026-03-20T18:26:58.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.444+0000 7ff0d651f900 2 all 8 watchers are set, enabling cache 2026-03-20T18:26:58.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.447+0000 7ff0d651f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:58.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.447+0000 7ff0d651f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:58.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.447+0000 7ff0d651f900 5 note: GC not initialized 2026-03-20T18:26:58.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.447+0000 7ff033fff640 20 reqs_thread_entry: start 2026-03-20T18:26:58.504 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.502+0000 7ff0d651f900 20 init_complete bucket index max shards: 11 2026-03-20T18:26:58.504 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.502+0000 7ff0d651f900 20 Filter name: none 2026-03-20T18:26:58.504 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.502+0000 7ff031ffb640 20 reqs_thread_entry: start 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.515+0000 7ff0d651f900 20 remove_watcher() i=6 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.515+0000 7ff0d651f900 2 removed watcher, disabling cache 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.515+0000 7ff0d651f900 20 remove_watcher() i=0 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.515+0000 7ff0d651f900 20 remove_watcher() i=1 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.515+0000 7ff0d651f900 20 remove_watcher() i=5 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.515+0000 7ff0d651f900 20 remove_watcher() i=2 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.515+0000 7ff0d651f900 20 remove_watcher() i=3 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.516+0000 7ff0d651f900 20 remove_watcher() i=7 2026-03-20T18:26:58.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.516+0000 7ff0d651f900 20 remove_watcher() i=4 2026-03-20T18:26:58.525 INFO:teuthology.orchestra.run.vm05.stdout:[] 2026-03-20T18:26:58.525 DEBUG:tasks.util.rgw: json result: [] 2026-03-20T18:26:58.525 INFO:tasks.rgw:Configuring storage class = FROZEN 2026-03-20T18:26:58.525 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T18:26:58.525 
DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN'] 2026-03-20T18:26:58.525 DEBUG:teuthology.orchestra.run.vm05:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN 2026-03-20T18:26:58.608 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:58.608 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:58.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.626+0000 7fe31b726900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:58.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.626+0000 7fe31b726900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:58.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.626+0000 7fe2c57e2640 20 reqs_thread_entry: start 2026-03-20T18:26:58.640 INFO:teuthology.orchestra.run.vm05.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T18:26:58.640 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T18:26:58.640 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T18:26:58.640 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'FROZEN', '--data-pool', 'default.rgw.buckets.data.frozen'] 2026-03-20T18:26:58.640 DEBUG:teuthology.orchestra.run.vm05:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class FROZEN --data-pool default.rgw.buckets.data.frozen 2026-03-20T18:26:58.719 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:58.780 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:58.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.795+0000 7f674fb52900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:58.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.795+0000 7f674fb52900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:58.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.796+0000 7f66fb7e6640 20 reqs_thread_entry: start 2026-03-20T18:26:58.858 
INFO:teuthology.orchestra.run.vm05.stdout:{"id":"06c127a6-ead2-4613-b3e9-3a45595f4b52","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T18:26:58.858 DEBUG:tasks.util.rgw: json result: {'id': '06c127a6-ead2-4613-b3e9-3a45595f4b52', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T18:26:58.858 INFO:tasks.rgw:Configuring storage class = LUKEWARM 2026-03-20T18:26:58.858 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T18:26:58.858 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zonegroup', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM'] 2026-03-20T18:26:58.858 DEBUG:teuthology.orchestra.run.vm05:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph 
zonegroup placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM 2026-03-20T18:26:58.942 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:58.942 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:58.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.956+0000 7f1bfe34f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:58.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.956+0000 7f1bfe34f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:58.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:58.956+0000 7f1ba8fe1640 20 reqs_thread_entry: start 2026-03-20T18:26:59.006 INFO:teuthology.orchestra.run.vm05.stdout:[{"key":"default-placement","val":{"name":"default-placement","tags":[],"storage_classes":["FROZEN","LUKEWARM","STANDARD"]}}] 2026-03-20T18:26:59.006 DEBUG:tasks.util.rgw: json result: [{'key': 'default-placement', 'val': {'name': 'default-placement', 'tags': [], 'storage_classes': ['FROZEN', 'LUKEWARM', 'STANDARD']}}] 2026-03-20T18:26:59.006 INFO:tasks.util.rgw:rgwadmin: client.2 : ['zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T18:26:59.006 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.2', '--cluster', 'ceph', 'zone', 'placement', 'add', '--rgw-zone', 'default', '--placement-id', 'default-placement', '--storage-class', 'LUKEWARM', '--data-pool', 'default.rgw.buckets.data.lukewarm'] 2026-03-20T18:26:59.006 DEBUG:teuthology.orchestra.run.vm05:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.2 --cluster ceph zone placement add --rgw-zone default --placement-id default-placement --storage-class LUKEWARM --data-pool default.rgw.buckets.data.lukewarm 2026-03-20T18:26:59.045 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:26:59.045 INFO:teuthology.orchestra.run.vm05.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:26:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:59.060+0000 7fce6253d900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:26:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:59.060+0000 7fce6253d900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:26:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-20T18:26:59.060+0000 7fce0d7e2640 20 reqs_thread_entry: start 2026-03-20T18:26:59.100 
INFO:teuthology.orchestra.run.vm05.stdout:{"id":"06c127a6-ead2-4613-b3e9-3a45595f4b52","name":"default","domain_root":"default.rgw.meta:root","control_pool":"default.rgw.control","dedup_pool":"default.rgw.dedup","gc_pool":"default.rgw.log:gc","lc_pool":"default.rgw.log:lc","log_pool":"default.rgw.log","intent_log_pool":"default.rgw.log:intent","usage_log_pool":"default.rgw.log:usage","roles_pool":"default.rgw.meta:roles","reshard_pool":"default.rgw.log:reshard","user_keys_pool":"default.rgw.meta:users.keys","user_email_pool":"default.rgw.meta:users.email","user_swift_pool":"default.rgw.meta:users.swift","user_uid_pool":"default.rgw.meta:users.uid","otp_pool":"default.rgw.otp","notif_pool":"default.rgw.log:notif","topics_pool":"default.rgw.meta:topics","account_pool":"default.rgw.meta:accounts","group_pool":"default.rgw.meta:groups","system_key":{"access_key":"","secret_key":""},"placement_pools":[{"key":"default-placement","val":{"index_pool":"default.rgw.buckets.index","storage_classes":{"FROZEN":{"data_pool":"default.rgw.buckets.data.frozen"},"LUKEWARM":{"data_pool":"default.rgw.buckets.data.lukewarm"},"STANDARD":{"data_pool":"default.rgw.buckets.data"}},"data_extra_pool":"default.rgw.buckets.non-ec","index_type":0,"inline_data":true}}],"realm_id":"","restore_pool":"default.rgw.log:restore"} 2026-03-20T18:26:59.100 DEBUG:tasks.util.rgw: json result: {'id': '06c127a6-ead2-4613-b3e9-3a45595f4b52', 'name': 'default', 'domain_root': 'default.rgw.meta:root', 'control_pool': 'default.rgw.control', 'dedup_pool': 'default.rgw.dedup', 'gc_pool': 'default.rgw.log:gc', 'lc_pool': 'default.rgw.log:lc', 'log_pool': 'default.rgw.log', 'intent_log_pool': 'default.rgw.log:intent', 'usage_log_pool': 'default.rgw.log:usage', 'roles_pool': 'default.rgw.meta:roles', 'reshard_pool': 'default.rgw.log:reshard', 'user_keys_pool': 'default.rgw.meta:users.keys', 'user_email_pool': 'default.rgw.meta:users.email', 'user_swift_pool': 'default.rgw.meta:users.swift', 'user_uid_pool': 'default.rgw.meta:users.uid', 'otp_pool': 'default.rgw.otp', 'notif_pool': 'default.rgw.log:notif', 'topics_pool': 'default.rgw.meta:topics', 'account_pool': 'default.rgw.meta:accounts', 'group_pool': 'default.rgw.meta:groups', 'system_key': {'access_key': '', 'secret_key': ''}, 'placement_pools': [{'key': 'default-placement', 'val': {'index_pool': 'default.rgw.buckets.index', 'storage_classes': {'FROZEN': {'data_pool': 'default.rgw.buckets.data.frozen'}, 'LUKEWARM': {'data_pool': 'default.rgw.buckets.data.lukewarm'}, 'STANDARD': {'data_pool': 'default.rgw.buckets.data'}}, 'data_extra_pool': 'default.rgw.buckets.non-ec', 'index_type': 0, 'inline_data': True}}], 'realm_id': '', 'restore_pool': 'default.rgw.log:restore'} 2026-03-20T18:26:59.100 INFO:tasks.rgw:Starting rgw... 
2026-03-20T18:26:59.100 INFO:tasks.rgw:rgw client.0 config is {} 2026-03-20T18:26:59.100 INFO:tasks.rgw:Using beast as radosgw frontend 2026-03-20T18:26:59.100 DEBUG:teuthology.orchestra.run.vm00:> sudo echo -n http://vm00.local:80 | sudo tee /home/ubuntu/cephtest/url_file 2026-03-20T18:26:59.128 INFO:teuthology.orchestra.run.vm00.stdout:http://vm00.local:80 2026-03-20T18:26:59.128 DEBUG:teuthology.orchestra.run.vm00:> sudo chown ceph /home/ubuntu/cephtest/url_file 2026-03-20T18:26:59.195 INFO:tasks.rgw.client.0:Restarting daemon 2026-03-20T18:26:59.195 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.0.stdout 2>&1 2026-03-20T18:26:59.236 INFO:tasks.rgw.client.0:Started 2026-03-20T18:26:59.236 INFO:tasks.rgw:rgw client.1 config is {} 2026-03-20T18:26:59.236 INFO:tasks.rgw:Using beast as radosgw frontend 2026-03-20T18:26:59.236 DEBUG:teuthology.orchestra.run.vm02:> sudo echo -n http://vm02.local:80 | sudo tee /home/ubuntu/cephtest/url_file 2026-03-20T18:26:59.265 INFO:teuthology.orchestra.run.vm02.stdout:http://vm02.local:80 2026-03-20T18:26:59.265 DEBUG:teuthology.orchestra.run.vm02:> sudo chown ceph /home/ubuntu/cephtest/url_file 2026-03-20T18:26:59.336 INFO:tasks.rgw.client.1:Restarting daemon 2026-03-20T18:26:59.336 DEBUG:teuthology.orchestra.run.vm02:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.1 --cluster ceph -k /etc/ceph/ceph.client.1.keyring --log-file /var/log/ceph/rgw.ceph.client.1.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.1.stdout 2>&1 2026-03-20T18:26:59.380 INFO:tasks.rgw.client.1:Started 2026-03-20T18:26:59.380 INFO:tasks.rgw:rgw client.2 config is {} 2026-03-20T18:26:59.380 INFO:tasks.rgw:Using beast as radosgw frontend 2026-03-20T18:26:59.380 DEBUG:teuthology.orchestra.run.vm05:> sudo echo -n http://vm05.local:80 | sudo tee /home/ubuntu/cephtest/url_file 2026-03-20T18:26:59.409 INFO:teuthology.orchestra.run.vm05.stdout:http://vm05.local:80 2026-03-20T18:26:59.409 DEBUG:teuthology.orchestra.run.vm05:> sudo chown ceph /home/ubuntu/cephtest/url_file 2026-03-20T18:26:59.481 INFO:tasks.rgw.client.2:Restarting daemon 2026-03-20T18:26:59.481 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.2 --cluster ceph -k /etc/ceph/ceph.client.2.keyring --log-file /var/log/ceph/rgw.ceph.client.2.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock --foreground | sudo tee /var/log/ceph/rgw.ceph.client.2.stdout 2>&1 2026-03-20T18:26:59.524 INFO:tasks.rgw.client.2:Started 2026-03-20T18:26:59.524 INFO:tasks.rgw:Polling client.0 until it starts accepting connections on http://vm00.local:80/ 2026-03-20T18:26:59.524 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/ 2026-03-20T18:26:59.567 DEBUG:teuthology.orchestra.run:got remote process result: 7 2026-03-20T18:26:59.567 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 
2026-03-20T18:26:59.567 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed 2026-03-20T18:26:59.567 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2026-03-20T18:26:59.567 INFO:teuthology.orchestra.run.vm00.stderr:curl: (7) Failed to connect to vm00.local port 80: Connection refused 2026-03-20T18:27:00.568 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/ 2026-03-20T18:27:00.585 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-20T18:27:00.586 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed 2026-03-20T18:27:00.587 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 182k 0 --:--:-- --:--:-- --:--:-- 182k 2026-03-20T18:27:00.587 INFO:teuthology.orchestra.run.vm00.stdout:anonymous 2026-03-20T18:27:00.588 INFO:tasks.rgw:Polling client.1 until it starts accepting connections on http://vm02.local:80/ 2026-03-20T18:27:00.588 DEBUG:teuthology.orchestra.run.vm02:> curl http://vm02.local:80/ 2026-03-20T18:27:00.605 INFO:teuthology.orchestra.run.vm02.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-20T18:27:00.605 INFO:teuthology.orchestra.run.vm02.stderr: Dload Upload Total Spent Left Speed 2026-03-20T18:27:00.607 INFO:teuthology.orchestra.run.vm02.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 182k 0 --:--:-- --:--:-- --:--:-- 182k 2026-03-20T18:27:00.607 INFO:teuthology.orchestra.run.vm02.stdout:anonymous 2026-03-20T18:27:00.607 INFO:tasks.rgw:Polling client.2 until it starts accepting connections on http://vm05.local:80/ 2026-03-20T18:27:00.607 DEBUG:teuthology.orchestra.run.vm05:> curl http://vm05.local:80/ 2026-03-20T18:27:00.628 INFO:teuthology.orchestra.run.vm05.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-20T18:27:00.628 INFO:teuthology.orchestra.run.vm05.stderr: Dload Upload Total Spent Left Speed 2026-03-20T18:27:00.630 INFO:teuthology.orchestra.run.vm05.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 93500 0 --:--:-- --:--:-- --:--:-- 93500 2026-03-20T18:27:00.631 INFO:teuthology.orchestra.run.vm05.stdout:anonymous 2026-03-20T18:27:00.631 INFO:teuthology.run_tasks:Running task tox... 2026-03-20T18:27:00.633 INFO:tasks.tox:Deploying tox from pip... 
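The readiness check that just completed is a simple retry loop: the task curls each endpoint until the beast frontend accepts connections, treating curl exit code 7 (connection refused, as on the first probe of vm00) as "not up yet" and the 187-byte anonymous bucket listing as success. A rough local equivalent in Python, with the attempt count and delay as assumed parameters rather than the task's actual values:

    import time
    import urllib.error
    import urllib.request

    def wait_for_rgw(url, attempts=30, delay=1.0):
        # Poll until the endpoint answers; connection refused means the
        # daemon has not bound its port yet, so sleep and retry.
        for _ in range(attempts):
            try:
                with urllib.request.urlopen(url) as resp:
                    return resp.read()  # anonymous ListAllMyBuckets XML
            except urllib.error.URLError:
                time.sleep(delay)
        raise RuntimeError('rgw at %s never accepted connections' % url)

    for url in ('http://vm00.local:80/', 'http://vm02.local:80/',
                'http://vm05.local:80/'):
        wait_for_rgw(url)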
2026-03-20T18:27:00.634 DEBUG:teuthology.orchestra.run.vm00:> python3 -m venv /home/ubuntu/cephtest/tox-venv 2026-03-20T18:27:01.947 DEBUG:teuthology.orchestra.run.vm00:> source /home/ubuntu/cephtest/tox-venv/bin/activate && pip install tox 2026-03-20T18:27:02.263 INFO:teuthology.orchestra.run.vm00.stdout:Collecting tox 2026-03-20T18:27:02.296 INFO:teuthology.orchestra.run.vm00.stdout: Downloading tox-4.30.3-py3-none-any.whl (175 kB) 2026-03-20T18:27:02.354 INFO:teuthology.orchestra.run.vm00.stdout:Collecting cachetools>=6.1 2026-03-20T18:27:02.363 INFO:teuthology.orchestra.run.vm00.stdout: Downloading cachetools-6.2.6-py3-none-any.whl (11 kB) 2026-03-20T18:27:02.402 INFO:teuthology.orchestra.run.vm00.stdout:Collecting filelock>=3.18 2026-03-20T18:27:02.415 INFO:teuthology.orchestra.run.vm00.stdout: Downloading filelock-3.19.1-py3-none-any.whl (15 kB) 2026-03-20T18:27:02.453 INFO:teuthology.orchestra.run.vm00.stdout:Collecting platformdirs>=4.3.8 2026-03-20T18:27:02.462 INFO:teuthology.orchestra.run.vm00.stdout: Downloading platformdirs-4.4.0-py3-none-any.whl (18 kB) 2026-03-20T18:27:02.487 INFO:teuthology.orchestra.run.vm00.stdout:Collecting pyproject-api>=1.9.1 2026-03-20T18:27:02.497 INFO:teuthology.orchestra.run.vm00.stdout: Downloading pyproject_api-1.9.1-py3-none-any.whl (13 kB) 2026-03-20T18:27:02.540 INFO:teuthology.orchestra.run.vm00.stdout:Collecting chardet>=5.2 2026-03-20T18:27:02.550 INFO:teuthology.orchestra.run.vm00.stdout: Downloading chardet-5.2.0-py3-none-any.whl (199 kB) 2026-03-20T18:27:02.594 INFO:teuthology.orchestra.run.vm00.stdout:Collecting tomli>=2.2.1 2026-03-20T18:27:02.603 INFO:teuthology.orchestra.run.vm00.stdout: Downloading tomli-2.4.0-py3-none-any.whl (14 kB) 2026-03-20T18:27:02.638 INFO:teuthology.orchestra.run.vm00.stdout:Collecting typing-extensions>=4.14.1 2026-03-20T18:27:02.647 INFO:teuthology.orchestra.run.vm00.stdout: Downloading typing_extensions-4.15.0-py3-none-any.whl (44 kB) 2026-03-20T18:27:02.673 INFO:teuthology.orchestra.run.vm00.stdout:Collecting pluggy>=1.6 2026-03-20T18:27:02.683 INFO:teuthology.orchestra.run.vm00.stdout: Downloading pluggy-1.6.0-py3-none-any.whl (20 kB) 2026-03-20T18:27:02.771 INFO:teuthology.orchestra.run.vm00.stdout:Collecting virtualenv>=20.31.2 2026-03-20T18:27:02.781 INFO:teuthology.orchestra.run.vm00.stdout: Downloading virtualenv-21.2.0-py3-none-any.whl (5.8 MB) 2026-03-20T18:27:02.877 INFO:teuthology.orchestra.run.vm00.stdout:Collecting colorama>=0.4.6 2026-03-20T18:27:02.887 INFO:teuthology.orchestra.run.vm00.stdout: Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB) 2026-03-20T18:27:02.924 INFO:teuthology.orchestra.run.vm00.stdout:Collecting packaging>=25 2026-03-20T18:27:02.939 INFO:teuthology.orchestra.run.vm00.stdout: Downloading packaging-26.0-py3-none-any.whl (74 kB) 2026-03-20T18:27:02.997 INFO:teuthology.orchestra.run.vm00.stdout:Collecting distlib<1,>=0.3.7 2026-03-20T18:27:03.011 INFO:teuthology.orchestra.run.vm00.stdout: Downloading distlib-0.4.0-py2.py3-none-any.whl (469 kB) 2026-03-20T18:27:03.056 INFO:teuthology.orchestra.run.vm00.stdout:Collecting python-discovery>=1 2026-03-20T18:27:03.067 INFO:teuthology.orchestra.run.vm00.stdout: Downloading python_discovery-1.2.0-py3-none-any.whl (31 kB) 2026-03-20T18:27:03.132 INFO:teuthology.orchestra.run.vm00.stdout:Installing collected packages: platformdirs, filelock, typing-extensions, tomli, python-discovery, packaging, distlib, virtualenv, pyproject-api, pluggy, colorama, chardet, cachetools, tox 2026-03-20T18:27:03.504 
INFO:teuthology.orchestra.run.vm00.stdout:Successfully installed cachetools-6.2.6 chardet-5.2.0 colorama-0.4.6 distlib-0.4.0 filelock-3.19.1 packaging-26.0 platformdirs-4.4.0 pluggy-1.6.0 pyproject-api-1.9.1 python-discovery-1.2.0 tomli-2.4.0 tox-4.30.3 typing-extensions-4.15.0 virtualenv-21.2.0 2026-03-20T18:27:03.589 INFO:teuthology.orchestra.run.vm00.stderr:WARNING: You are using pip version 21.3.1; however, version 26.0.1 is available. 2026-03-20T18:27:03.589 INFO:teuthology.orchestra.run.vm00.stderr:You should consider upgrading via the '/home/ubuntu/cephtest/tox-venv/bin/python3 -m pip install --upgrade pip' command. 2026-03-20T18:27:03.635 INFO:teuthology.run_tasks:Running task tox... 2026-03-20T18:27:03.638 INFO:tasks.tox:Deploying tox from pip... 2026-03-20T18:27:03.638 DEBUG:teuthology.orchestra.run.vm00:> python3 -m venv /home/ubuntu/cephtest/tox-venv 2026-03-20T18:27:04.320 DEBUG:teuthology.orchestra.run.vm00:> source /home/ubuntu/cephtest/tox-venv/bin/activate && pip install tox 2026-03-20T18:27:04.472 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: tox in ./cephtest/tox-venv/lib/python3.9/site-packages (4.30.3) 2026-03-20T18:27:04.477 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: tomli>=2.2.1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (2.4.0) 2026-03-20T18:27:04.477 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: filelock>=3.18 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (3.19.1) 2026-03-20T18:27:04.477 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: colorama>=0.4.6 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (0.4.6) 2026-03-20T18:27:04.478 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: cachetools>=6.1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (6.2.6) 2026-03-20T18:27:04.478 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: virtualenv>=20.31.2 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (21.2.0) 2026-03-20T18:27:04.478 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: platformdirs>=4.3.8 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (4.4.0) 2026-03-20T18:27:04.478 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: chardet>=5.2 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (5.2.0) 2026-03-20T18:27:04.478 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: pluggy>=1.6 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (1.6.0) 2026-03-20T18:27:04.479 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: pyproject-api>=1.9.1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (1.9.1) 2026-03-20T18:27:04.479 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: typing-extensions>=4.14.1 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (4.15.0) 2026-03-20T18:27:04.479 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: packaging>=25 in ./cephtest/tox-venv/lib/python3.9/site-packages (from tox) (26.0) 2026-03-20T18:27:04.501 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: distlib<1,>=0.3.7 in ./cephtest/tox-venv/lib/python3.9/site-packages (from virtualenv>=20.31.2->tox) (0.4.0) 2026-03-20T18:27:04.501 INFO:teuthology.orchestra.run.vm00.stdout:Requirement already satisfied: python-discovery>=1 in 
./cephtest/tox-venv/lib/python3.9/site-packages (from virtualenv>=20.31.2->tox) (1.2.0) 2026-03-20T18:27:04.519 INFO:teuthology.orchestra.run.vm00.stderr:WARNING: You are using pip version 21.3.1; however, version 26.0.1 is available. 2026-03-20T18:27:04.519 INFO:teuthology.orchestra.run.vm00.stderr:You should consider upgrading via the '/home/ubuntu/cephtest/tox-venv/bin/python3 -m pip install --upgrade pip' command. 2026-03-20T18:27:04.548 INFO:teuthology.run_tasks:Running task dedup-tests... 2026-03-20T18:27:04.552 DEBUG:tasks.dedup_tests:config is {'client.0': {'rgw_server': 'client.0'}} 2026-03-20T18:27:04.552 INFO:tasks.dedup_tests:Downloading dedup-tests... 2026-03-20T18:27:04.552 INFO:tasks.dedup_tests:Using branch tt-tentacle from https://github.com/kshtsk/ceph.git for dedup tests 2026-03-20T18:27:04.552 DEBUG:teuthology.orchestra.run.vm00:> git clone -b tt-tentacle https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/ceph 2026-03-20T18:27:04.571 INFO:teuthology.orchestra.run.vm00.stderr:Cloning into '/home/ubuntu/cephtest/ceph'... 2026-03-20T18:28:02.913 INFO:tasks.dedup_tests:Creating rgw user... 2026-03-20T18:28:02.913 DEBUG:tasks.dedup_tests:Creating user foo.client.0 on client.0 2026-03-20T18:28:02.914 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid foo.client.0 --display-name 'Mr. foo.client.0' --access-key XZMCULEJKLZOAEBCAXPT --secret +Ioypmf3RrJkydHk3Iif6FRWOIF/+pZ95iMQ8F6MWzvp0d9E/cEbDw== --cluster ceph 2026-03-20T18:28:02.996 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root 2026-03-20T18:28:02.996 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root 2026-03-20T18:28:03.012 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.011+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.012+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.012+0000 7f43e9b1f900 20 realm 2026-03-20T18:28:03.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.012+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.012+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.012+0000 7f43e9b1f900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory 2026-03-20T18:28:03.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.012+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.012+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.013 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.012+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.014 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.013+0000 7f43e9b1f900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T18:28:03.014 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.013+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.014 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.013+0000 7f43e9b1f900 20 rados_obj.operate() r=0 bl.length=1190 2026-03-20T18:28:03.014 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.013+0000 7f43e9b1f900 20 searching for the correct realm 2026-03-20T18:28:03.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 RGWRados::pool_iterate: got default.zonegroup. 2026-03-20T18:28:03.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 RGWRados::pool_iterate: got zonegroup_info.1f41abd1-1863-43a5-a3b4-57a42098bd37 2026-03-20T18:28:03.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 RGWRados::pool_iterate: got zone_info.06c127a6-ead2-4613-b3e9-3a45595f4b52 2026-03-20T18:28:03.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 RGWRados::pool_iterate: got default.zone. 2026-03-20T18:28:03.023 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 RGWRados::pool_iterate: got zone_names.default 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.022+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 rados_obj.operate() r=0 bl.length=470 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 zone default found 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 4 Realm: () 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 4 ZoneGroup: default (1f41abd1-1863-43a5-a3b4-57a42098bd37) 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 4 Zone: default (06c127a6-ead2-4613-b3e9-3a45595f4b52) 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 10 cannot find current period zonegroup using local zonegroup configuration 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 zonegroup default 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.024 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 
rados->read ofs=0 len=0 2026-03-20T18:28:03.025 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.025 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 started sync module instance, tier type = 2026-03-20T18:28:03.025 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.023+0000 7f43e9b1f900 20 started zone id=06c127a6-ead2-4613-b3e9-3a45595f4b52 (name=default) with tier type = 2026-03-20T18:28:03.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.027+0000 7f43e9b1f900 20 add_watcher() i=0 2026-03-20T18:28:03.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.027+0000 7f43e9b1f900 20 add_watcher() i=6 2026-03-20T18:28:03.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.027+0000 7f43e9b1f900 20 add_watcher() i=1 2026-03-20T18:28:03.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.027+0000 7f43e9b1f900 20 add_watcher() i=3 2026-03-20T18:28:03.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.028+0000 7f43e9b1f900 20 add_watcher() i=2 2026-03-20T18:28:03.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.028+0000 7f43e9b1f900 20 add_watcher() i=4 2026-03-20T18:28:03.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.028+0000 7f43e9b1f900 20 add_watcher() i=5 2026-03-20T18:28:03.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.028+0000 7f43e9b1f900 20 add_watcher() i=7 2026-03-20T18:28:03.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.028+0000 7f43e9b1f900 2 all 8 watchers are set, enabling cache 2026-03-20T18:28:03.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.030+0000 7f43e9b1f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T18:28:03.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.030+0000 7f43e9b1f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T18:28:03.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.030+0000 7f43e9b1f900 5 note: GC not initialized 2026-03-20T18:28:03.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.030+0000 7f4392fe5640 20 reqs_thread_entry: start 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.068+0000 7f43e9b1f900 20 init_complete bucket index max shards: 11 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.068+0000 7f43e9b1f900 20 Filter name: none 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.068+0000 7f4390fe1640 20 reqs_thread_entry: start 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.068+0000 7f43e9b1f900 10 cache get: name=default.rgw.meta+users.uid+foo.client.0 : miss 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.068+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 10 cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x0 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 10 adding default.rgw.meta+users.uid+foo.client.0 to cache LRU end 
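The `cache get` / `cache put` chatter above is radosgw's in-memory metadata cache at work: the uid lookup misses, falls through to a RADOS read that returns `r=-2` (-ENOENT, the user does not exist yet), and that not-found result is itself cached, so the probes just below are answered locally as `hit (negative entry)`; every touch moves the key to the LRU end. A minimal sketch of that shape, assuming nothing about radosgw's actual data structures:

```python
from collections import OrderedDict

_MISS = object()  # sentinel for a cached "not found" (negative entry)

class MetadataCache:
    """LRU cache with negative entries, modelled loosely on the
    cache get/put/LRU-end messages in the log; illustrative only."""

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, name, fetch):
        if name in self.entries:
            self.entries.move_to_end(name)            # "moving ... to cache LRU end"
            value = self.entries[name]
            return None if value is _MISS else value  # negative vs. plain hit
        value = fetch(name)                           # "miss": go to RADOS
        self.entries[name] = _MISS if value is None else value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)          # evict least recently used
        return value
```

Without the negative entries, each of the pre-create existence probes for `foo.client.0` and its access key would be another round trip to the metadata pool.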
2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 10 cache get: name=default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT : miss 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 20 rados->read ofs=0 len=0 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 10 cache put: name=default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT info.flags=0x0 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 10 adding default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT to cache LRU end 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 10 cache get: name=default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT : hit (negative entry) 2026-03-20T18:28:03.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.069+0000 7f43e9b1f900 10 cache get: name=default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT : hit (negative entry) 2026-03-20T18:28:03.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.070+0000 7f43e9b1f900 10 cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x17 2026-03-20T18:28:03.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.070+0000 7f43e9b1f900 10 moving default.rgw.meta+users.uid+foo.client.0 to cache LRU end 2026-03-20T18:28:03.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.070+0000 7f43e9b1f900 10 distributing notification oid=default.rgw.control:notify.0 cni=[op: 0, obj: default.rgw.meta:users.uid:foo.client.0, ofs0, ns] 2026-03-20T18:28:03.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.071+0000 7f43c0ff9640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 171798691840 cookie 94610639279296 notifier 4734 bl.length()=628 2026-03-20T18:28:03.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.071+0000 7f43c0ff9640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.uid+foo.client.0 info.flags=0x17 2026-03-20T18:28:03.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.071+0000 7f43c0ff9640 10 rgw watcher librados: moving default.rgw.meta+users.uid+foo.client.0 to cache LRU end 2026-03-20T18:28:03.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.072+0000 7f43e9b1f900 10 cache put: name=default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT info.flags=0x7 2026-03-20T18:28:03.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.072+0000 7f43e9b1f900 10 moving default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT to cache LRU end 2026-03-20T18:28:03.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.072+0000 7f43e9b1f900 10 distributing notification oid=default.rgw.control:notify.2 cni=[op: 0, obj: default.rgw.meta:users.keys:XZMCULEJKLZOAEBCAXPT, ofs0, ns] 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.073+0000 7f43c0ff9640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 171798691840 cookie 94610639287152 notifier 4734 bl.length()=186 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.073+0000 7f43c0ff9640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT info.flags=0x7 
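The `distributing notification oid=default.rgw.control:notify.N` / `RGWWatcher::handle_notify()` pairs are the cross-instance half of that cache: the writer broadcasts each updated entry on a control object, and every process watching it (the eight `add_watcher()` calls earlier) applies the same `cache put` locally. A toy sketch of the broadcast-and-apply shape, with an in-process bus standing in for librados watch/notify:

```python
class ControlObject:
    """In-process stand-in for a RADOS notify object such as
    default.rgw.control:notify.0; real radosgw uses librados
    watch/notify here. Illustrative only."""

    def __init__(self):
        self.watchers = []

    def watch(self, callback):
        self.watchers.append(callback)   # add_watcher()

    def notify(self, payload):
        for cb in self.watchers:         # "distributing notification ..."
            cb(payload)                  # each peer's handle_notify()

def make_peer(name, cache):
    def handle_notify(payload):
        cache[payload["name"]] = payload["info"]  # mirror the writer's cache put
        print(f"{name}: cache put {payload['name']}")
    return handle_notify

control = ControlObject()
peer_cache = {}
control.watch(make_peer("peer-1", peer_cache))
control.notify({"name": "default.rgw.meta+users.uid+foo.client.0",
                "info": {"flags": 0x17}})
```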
2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.073+0000 7f43c0ff9640 10 rgw watcher librados: moving default.rgw.meta+users.keys+XZMCULEJKLZOAEBCAXPT to cache LRU end 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "user_id": "foo.client.0", 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "display_name": "Mr. foo.client.0", 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "email": "", 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "suspended": 0, 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "max_buckets": 1000, 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "subusers": [], 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "keys": [ 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: { 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "user": "foo.client.0", 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "access_key": "XZMCULEJKLZOAEBCAXPT", 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "secret_key": "+Ioypmf3RrJkydHk3Iif6FRWOIF/+pZ95iMQ8F6MWzvp0d9E/cEbDw==", 2026-03-20T18:28:03.074 INFO:teuthology.orchestra.run.vm00.stdout: "active": true, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "create_date": "2026-03-20T18:28:03.070354Z" 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: } 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: ], 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "swift_keys": [], 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "caps": [], 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "op_mask": "read, write, delete", 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "default_placement": "", 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "default_storage_class": "", 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "placement_tags": [], 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "bucket_quota": { 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "check_on_raw": false, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "max_size": -1, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "max_size_kb": 0, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "max_objects": -1 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "user_quota": { 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "enabled": false, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "check_on_raw": false, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "max_size": -1, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "max_size_kb": 0, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "max_objects": -1 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: }, 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "temp_url_keys": [], 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "type": "rgw", 2026-03-20T18:28:03.075 
INFO:teuthology.orchestra.run.vm00.stdout: "mfa_ids": [], 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "account_id": "", 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "path": "/", 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "create_date": "2026-03-20T18:28:03.070348Z", 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "tags": [], 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: "group_ids": [] 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-20T18:28:03.075 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.077+0000 7f43e9b1f900 20 remove_watcher() i=1 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.077+0000 7f43e9b1f900 2 removed watcher, disabling cache 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.078+0000 7f43e9b1f900 20 remove_watcher() i=3 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.078+0000 7f43e9b1f900 20 remove_watcher() i=0 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.078+0000 7f43e9b1f900 20 remove_watcher() i=2 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.078+0000 7f43e9b1f900 20 remove_watcher() i=5 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.078+0000 7f43e9b1f900 20 remove_watcher() i=6 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.078+0000 7f43e9b1f900 20 remove_watcher() i=7 2026-03-20T18:28:03.079 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T18:28:03.078+0000 7f43e9b1f900 20 remove_watcher() i=4 2026-03-20T18:28:03.084 INFO:tasks.dedup_tests:Configuring dedup-tests... 2026-03-20T18:28:03.084 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T18:28:03.084 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph/src/test/rgw/dedup/deduptests.client.0.conf 2026-03-20T18:28:03.141 INFO:tasks.dedup_tests:Running dedup-tests... 2026-03-20T18:28:03.141 DEBUG:teuthology.orchestra.run.vm00:dedup tests against rgw> source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test' 2026-03-20T18:28:03.544 INFO:teuthology.orchestra.run.vm00.stdout:py: install_deps> python -I -m pip install -r requirements.txt 2026-03-20T18:28:06.229 INFO:teuthology.orchestra.run.vm00.stdout:py: commands[0]> pytest -v -m 'basic_test or request_test or example_test' 2026-03-20T18:28:06.318 INFO:teuthology.orchestra.run.vm00.stdout:============================= test session starts ============================== 2026-03-20T18:28:06.318 INFO:teuthology.orchestra.run.vm00.stdout:platform linux -- Python 3.9.25, pytest-8.4.2, pluggy-1.6.0 -- /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/bin/python 2026-03-20T18:28:06.318 INFO:teuthology.orchestra.run.vm00.stdout:cachedir: .tox/py/.pytest_cache 2026-03-20T18:28:06.318 INFO:teuthology.orchestra.run.vm00.stdout:rootdir: /home/ubuntu/cephtest/ceph/src/test/rgw/dedup 2026-03-20T18:28:06.318 INFO:teuthology.orchestra.run.vm00.stdout:configfile: pytest.ini 2026-03-20T18:28:06.424 INFO:teuthology.orchestra.run.vm00.stdout:collecting ... 
collected 34 items 2026-03-20T18:28:06.424 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:28:06.550 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_etag_corruption PASSED [ 2%] 2026-03-20T18:28:06.550 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_md5_collisions PASSED [ 5%] 2026-03-20T18:28:06.550 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small PASSED [ 8%] 2026-03-20T18:28:06.551 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_with_tenants PASSED [ 11%] 2026-03-20T18:28:06.551 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_0_with_tenants PASSED [ 14%] 2026-03-20T18:28:06.551 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_0 PASSED [ 17%] 2026-03-20T18:28:06.551 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_1_with_tenants PASSED [ 20%] 2026-03-20T18:28:06.552 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_1 PASSED [ 23%] 2026-03-20T18:28:06.552 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_2_with_tenants PASSED [ 26%] 2026-03-20T18:28:06.552 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_2 PASSED [ 29%] 2026-03-20T18:28:06.553 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_with_remove_multi_tenants PASSED [ 32%] 2026-03-20T18:28:06.553 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_with_remove PASSED [ 35%] 2026-03-20T18:28:06.553 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_multipart_with_tenants PASSED [ 38%] 2026-03-20T18:28:06.554 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_multipart PASSED [ 41%] 2026-03-20T18:28:06.554 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_basic_with_tenants PASSED [ 44%] 2026-03-20T18:28:06.554 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_basic PASSED [ 47%] 2026-03-20T18:28:06.555 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_multipart_with_tenants PASSED [ 50%] 2026-03-20T18:28:06.555 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_small_multipart PASSED [ 52%] 2026-03-20T18:28:06.555 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_large_scale_with_tenants PASSED [ 55%] 2026-03-20T18:28:06.555 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_large_scale PASSED [ 58%] 2026-03-20T18:28:06.556 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_empty_bucket PASSED [ 61%] 2026-03-20T18:28:06.556 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_inc_loop_with_tenants PASSED [ 64%] 2026-03-20T18:28:13.060 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_with_tenants 2026-03-20T18:28:13.060 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:28:13.060 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:28:13.631 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 67%] 2026-03-20T18:30:53.654 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart 2026-03-20T18:30:53.654 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:30:53.654 
INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:30:58.320 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 70%] 2026-03-20T18:31:07.793 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_basic 2026-03-20T18:31:07.793 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:31:07.793 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:31:08.348 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 73%] 2026-03-20T18:31:18.471 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_multipart 2026-03-20T18:31:18.471 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:31:18.471 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:31:18.994 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 76%] 2026-03-20T18:31:24.796 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small 2026-03-20T18:31:24.796 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:31:24.796 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:31:25.277 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 79%] 2026-03-20T18:31:39.596 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_large_mix 2026-03-20T18:31:39.596 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:31:39.596 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:31:40.757 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 82%] 2026-03-20T18:32:00.641 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_basic_with_tenants 2026-03-20T18:32:00.641 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:32:00.641 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:32:01.556 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 85%] 2026-03-20T18:33:25.619 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart_with_tenants 2026-03-20T18:33:25.619 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:33:25.619 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:33:28.886 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 88%] 2026-03-20T18:33:39.932 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_multipart_with_tenants 2026-03-20T18:33:39.932 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:33:39.932 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:33:40.763 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 91%] 2026-03-20T18:41:08.423 
INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale_with_tenants 2026-03-20T18:41:08.423 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call --------------------------------- 2026-03-20T18:41:08.423 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1096 dedup completed in 5 seconds 2026-03-20T18:41:08.423 INFO:teuthology.orchestra.run.vm00.stdout:INFO dedup.test_dedup:test_dedup.py:1288 [64] obj_count=65494, upload=428(sec), exec=5(sec), verify=0(sec) 2026-03-20T18:41:29.024 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T18:41:29.023+0000 7f8209884640 -1 log_channel(cluster) log [ERR] : Health check failed: mon c is very low on available space (MON_DISK_CRIT) 2026-03-20T18:41:34.119 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T18:41:34.117+0000 7f820c089640 -1 log_channel(cluster) log [ERR] : Health check update: mons a,c are very low on available space (MON_DISK_CRIT) 2026-03-20T18:41:44.453 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T18:41:44.452+0000 7f820c089640 -1 log_channel(cluster) log [ERR] : Health check update: mons a,b,c are very low on available space (MON_DISK_CRIT) 2026-03-20T18:41:54.121 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T18:41:54.120+0000 7f820c089640 -1 log_channel(cluster) log [ERR] : Health check update: mons a,c are very low on available space (MON_DISK_CRIT) 2026-03-20T18:42:00.025 INFO:tasks.rgw.client.2.vm05.stdout:2026-03-20T18:42:00.024+0000 7fecf0e91640 -1 restore: virtual void* rgw::restore::Restore::RestoreWorker::entry(): ERROR: restore process() returned error r=-16 2026-03-20T18:42:52.660 INFO:teuthology.orchestra.run.vm00.stdout:PASSED [ 94%] 2026-03-20T18:43:04.410 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T18:43:04.408+0000 7f820c089640 -1 log_channel(cluster) log [ERR] : Health check update: mons a,b,c are very low on available space (MON_DISK_CRIT) 2026-03-20T18:43:32.476 INFO:tasks.ceph.osd.0.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.0.log: (28) No space left on device 2026-03-20T18:43:32.477 INFO:tasks.rgw.client.0.vm00.stdout:problem writing to /var/log/ceph/rgw.ceph.client.0.log: (28) No space left on device 2026-03-20T18:43:32.478 INFO:tasks.ceph.osd.1.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.1.log: (28) No space left on device 2026-03-20T18:43:32.478 INFO:tasks.ceph.osd.3.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.3.log: (28) No space left on device 2026-03-20T18:43:32.480 INFO:tasks.ceph.osd.2.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.2.log: (28) No space left on device 2026-03-20T18:43:32.520 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-20T18:43:32.857 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device 2026-03-20T18:43:34.292 INFO:tasks.ceph.osd.1.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.1.log: (28) No space left on device 2026-03-20T18:43:34.292 INFO:tasks.ceph.osd.3.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.3.log: (28) No space left on device 2026-03-20T18:43:34.294 INFO:tasks.ceph.osd.0.vm00.stderr:problem writing to /var/log/ceph/ceph-osd.0.log: (28) No space left on device 2026-03-20T18:43:35.928 INFO:tasks.ceph.mgr.y.vm00.stderr:problem writing to /var/log/ceph/ceph-mgr.y.log: (28) No space left on device 2026-03-20T18:44:13.917 INFO:tasks.ceph.osd.4.vm02.stderr:problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left 
on device
2026-03-20T18:44:13.917 INFO:tasks.ceph.osd.4.vm02.stderr:problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left on device
2026-03-20T18:44:13.918 INFO:tasks.ceph.osd.6.vm02.stderr:problem writing to /var/log/ceph/ceph-osd.6.log: (28) No space left on device
2026-03-20T18:44:13.919 INFO:tasks.ceph.osd.7.vm02.stderr:problem writing to /var/log/ceph/ceph-osd.7.log: (28) No space left on device
2026-03-20T18:44:13.920 INFO:tasks.ceph.osd.5.vm02.stderr:problem writing to /var/log/ceph/ceph-osd.5.log: (28) No space left on device
2026-03-20T18:44:14.260 INFO:tasks.ceph.mon.b.vm02.stderr:problem writing to /var/log/ceph/ceph-mon.b.log: (28) No space left on device
2026-03-20T18:44:14.397 INFO:tasks.ceph.mgr.x.vm02.stderr:problem writing to /var/log/ceph/ceph-mgr.x.log: (28) No space left on device
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:2026-03-20T18:45:10.045+0000 7f7e17726640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-c/store.db/000025.log: No space left on device code =  Rocksdb transaction:
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f7e17726640 time 2026-03-20T18:45:10.046692+0000
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7f7e1fd911f3]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x55c53815f9bc]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x55c538243175]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x55c53824c211]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr<MonOpRequest>)+0x620) [0x55c5382485e0]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 6: (Elector::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa7) [0x55c538249197]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xe4d) [0x55c5381b873d]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x55c5381acec6]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 9: ceph-mon(+0x2b3b8c) [0x55c53816cb8c]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f7e20008518]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f7e2009cc11]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /lib64/libc.so.6(+0x8b2fa) [0x7f7e1ee8b2fa]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x1103d0) [0x7f7e1ef103d0]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:*** Caught signal (Aborted) **
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f7e17726640 thread_name:ms_dispatch
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:2026-03-20T18:45:10.046+0000 7f7e17726640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f7e17726640 time 2026-03-20T18:45:10.046692+0000
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7f7e1fd911f3]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x55c53815f9bc]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x55c538243175]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x55c53824c211]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr<MonOpRequest>)+0x620) [0x55c5382485e0]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 6: (Elector::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa7) [0x55c538249197]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xe4d) [0x55c5381b873d]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x55c5381acec6]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 9: ceph-mon(+0x2b3b8c) [0x55c53816cb8c]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f7e20008518]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f7e2009cc11]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /lib64/libc.so.6(+0x8b2fa) [0x7f7e1ee8b2fa]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x1103d0) [0x7f7e1ef103d0]
2026-03-20T18:45:10.047 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f7e1ee3fc30]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f7e1ee8d03c]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7f7e1fd912b0]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x55c53815f9bc]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x55c538243175]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x55c53824c211]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr<MonOpRequest>)+0x620) [0x55c5382485e0]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (Elector::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa7) [0x55c538249197]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xe4d) [0x55c5381b873d]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x55c5381acec6]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 13: ceph-mon(+0x2b3b8c) [0x55c53816cb8c]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f7e20008518]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f7e2009cc11]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 16: /lib64/libc.so.6(+0x8b2fa) [0x7f7e1ee8b2fa]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 17: /lib64/libc.so.6(+0x1103d0) [0x7f7e1ef103d0]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr:2026-03-20T18:45:10.047+0000 7f7e17726640 -1 *** Caught signal (Aborted) **
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f7e17726640 thread_name:ms_dispatch
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f7e1ee3fc30]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f7e1ee8d03c]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7f7e1fd912b0]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x55c53815f9bc]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x55c538243175]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x55c53824c211]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr<MonOpRequest>)+0x620) [0x55c5382485e0]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (Elector::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa7) [0x55c538249197]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xe4d) [0x55c5381b873d]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x55c5381acec6]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 13: ceph-mon(+0x2b3b8c) [0x55c53816cb8c]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f7e20008518]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f7e2009cc11]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 16: /lib64/libc.so.6(+0x8b2fa) [0x7f7e1ee8b2fa]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: 17: /lib64/libc.so.6(+0x1103d0) [0x7f7e1ef103d0]
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.048 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: -2> 2026-03-20T18:45:10.045+0000 7f7e17726640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-c/store.db/000025.log: No space left on device code =  Rocksdb transaction:
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: -1> 2026-03-20T18:45:10.046+0000 7f7e17726640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f7e17726640 time 2026-03-20T18:45:10.046692+0000
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7f7e1fd911f3]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x55c53815f9bc]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x55c538243175]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x55c53824c211]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr<MonOpRequest>)+0x620) [0x55c5382485e0]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 6: (Elector::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa7) [0x55c538249197]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xe4d) [0x55c5381b873d]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x55c5381acec6]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 9: ceph-mon(+0x2b3b8c) [0x55c53816cb8c]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f7e20008518]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f7e2009cc11]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /lib64/libc.so.6(+0x8b2fa) [0x7f7e1ee8b2fa]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x1103d0) [0x7f7e1ef103d0]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 0> 2026-03-20T18:45:10.047+0000 7f7e17726640 -1 *** Caught signal (Aborted) **
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f7e17726640 thread_name:ms_dispatch
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f7e1ee3fc30]
2026-03-20T18:45:10.075 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f7e1ee8d03c]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7f7e1fd912b0]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x55c53815f9bc]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x55c538243175]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x55c53824c211]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr<MonOpRequest>)+0x620) [0x55c5382485e0]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (Elector::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xa7) [0x55c538249197]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xe4d) [0x55c5381b873d]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x55c5381acec6]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 13: ceph-mon(+0x2b3b8c) [0x55c53816cb8c]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f7e20008518]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f7e2009cc11]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 16: /lib64/libc.so.6(+0x8b2fa) [0x7f7e1ee8b2fa]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: 17: /lib64/libc.so.6(+0x1103d0) [0x7f7e1ef103d0]
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
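The abort above is the tail end of the MON_DISK_CRIT warnings from 18:41: the large-scale dedup pass uploaded 65,494 objects while every daemon logged at debug level 20, the filesystem filled, the RocksDB WAL append under /var/lib/ceph/mon/ceph-c/store.db returned ENOSPC, and MonitorDBStore::apply_transaction() deliberately aborts with `ceph_abort_msg("failed to write to db")` rather than keep running a monitor whose store can no longer persist. A free-space guard of roughly this shape ahead of the large-scale cases would fail the run earlier and more legibly; the helper name and threshold are assumptions, not anything teuthology ships:

```python
import shutil

def assert_free_space(path="/var/lib/ceph", min_free_gib=5.0):
    """Fail fast if `path` is low on space instead of letting RocksDB
    hit ENOSPC mid-test. Illustrative guard, not teuthology code."""
    free_gib = shutil.disk_usage(path).free / 2**30
    if free_gib < min_free_gib:
        raise RuntimeError(
            f"only {free_gib:.1f} GiB free under {path}; "
            f"need at least {min_free_gib} GiB before a large-scale run"
        )

# assert_free_space()  # e.g. before test_dedup_dry_large_scale_with_tenants
```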
2026-03-20T18:45:10.076 INFO:tasks.ceph.mon.c.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.c.log: (28) No space left on device
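The "(28)" in the message above is the raw errno value: on Linux, errno 28 is ENOSPC, "No space left on device", meaning the filesystem holding /var/log/ceph has filled up and the daemon cannot even append to its own log. A minimal C++ check of that mapping (illustrative only, not part of the test run):

    #include <cerrno>    // ENOSPC
    #include <cstring>   // std::strerror
    #include <iostream>

    int main() {
        // errno 28 on Linux is ENOSPC, the "(28)" shown in the log line above.
        std::cout << ENOSPC << ": " << std::strerror(ENOSPC) << '\n';
        // expected output: 28: No space left on device
    }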
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: -9999> 2026-03-20T18:45:10.045+0000 7f7e17726640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-c/store.db/000025.log: No space left on device code =  Rocksdb transaction:
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: -9998> 2026-03-20T18:45:10.046+0000 7f7e17726640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f7e17726640 time 2026-03-20T18:45:10.046692+0000
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7f7e1fd911f3]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x55c53815f9bc]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x55c538243175]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 4: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x55c53824c211]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x55c5382485e0]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 6: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x55c538249197]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x55c5381b873d]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (Monitor::_ms_dispatch(Message*)+0x786) [0x55c5381acec6]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 9: ceph-mon(+0x2b3b8c) [0x55c53816cb8c]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (DispatchQueue::entry()+0x4a8) [0x7f7e20008518]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 11: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f7e2009cc11]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 12: /lib64/libc.so.6(+0x8b2fa) [0x7f7e1ee8b2fa]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 13: /lib64/libc.so.6(+0x1103d0) [0x7f7e1ef103d0]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: -9997> 2026-03-20T18:45:10.047+0000 7f7e17726640 -1 *** Caught signal (Aborted) **
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: in thread 7f7e17726640 thread_name:ms_dispatch
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f7e1ee3fc30]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f7e1ee8d03c]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 3: raise()
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 4: abort()
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7f7e1fd912b0]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x55c53815f9bc]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x55c538243175]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 8: (ConnectionTracker::report_live_connection(int, double)+0x181) [0x55c53824c211]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 9: (Elector::handle_ping(boost::intrusive_ptr)+0x620) [0x55c5382485e0]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 10: (Elector::dispatch(boost::intrusive_ptr)+0xa7) [0x55c538249197]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 11: (Monitor::dispatch_op(boost::intrusive_ptr)+0xe4d) [0x55c5381b873d]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 12: (Monitor::_ms_dispatch(Message*)+0x786) [0x55c5381acec6]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 13: ceph-mon(+0x2b3b8c) [0x55c53816cb8c]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 14: (DispatchQueue::entry()+0x4a8) [0x7f7e20008518]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 15: /usr/lib64/ceph/libceph-common.so.2(+0x49cc11) [0x7f7e2009cc11]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 16: /lib64/libc.so.6(+0x8b2fa) [0x7f7e1ee8b2fa]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: 17: /lib64/libc.so.6(+0x1103d0) [0x7f7e1ef103d0]
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2026-03-20T18:45:10.080 INFO:tasks.ceph.mon.c.vm00.stderr:
2026-03-20T18:45:10.255 INFO:tasks.ceph.mon.c.vm00.stderr:daemon-helper: command crashed with signal 6
2026-03-20T18:45:10.668 INFO:tasks.ceph.mon.a.vm00.stderr:2026-03-20T18:45:10.666+0000 7f820c089640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-a/store.db/000025.log: No space left on device code =  Rocksdb transaction:
2026-03-20T18:45:10.668 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = '1256' value size = 13438)
2026-03-20T18:45:10.668 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8)
2026-03-20T18:45:10.668 INFO:tasks.ceph.mon.a.vm00.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8)
2026-03-20T18:45:10.668 INFO:tasks.ceph.mon.a.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7f820c089640 time 2026-03-20T18:45:10.667699+0000
2026-03-20T18:45:10.668 INFO:tasks.ceph.mon.a.vm00.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7f8211f911f3]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 2: ceph-mon(+0x2a69bc) [0x563fb99a09bc]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563fb9b24d8c]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 4: (Paxos::propose_pending()+0x11b) [0x563fb9b30dab]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (Paxos::trigger_propose()+0x118) [0x563fb9b311a8]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 6: (PaxosService::propose_pending()+0x24f) [0x563fb9b315bf]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 7: ceph-mon(+0x2a6c5d) [0x563fb99a0c5d]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (CommonSafeTimer::timer_thread()+0x130) [0x7f82120ddde0]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 9: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f82120de841]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 10: /lib64/libc.so.6(+0x8b2fa) [0x7f821108b2fa]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 11: /lib64/libc.so.6(+0x1103d0) [0x7f82111103d0]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr:*** Caught signal (Aborted) **
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: in thread 7f820c089640 thread_name:safe_timer
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7f821103fc30]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7f821108d03c]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 3: raise()
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 4: abort()
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7f8211f912b0]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 6: ceph-mon(+0x2a69bc) [0x563fb99a09bc]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x54c) [0x563fb9b24d8c]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 8: (Paxos::propose_pending()+0x11b) [0x563fb9b30dab]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 9: (Paxos::trigger_propose()+0x118) [0x563fb9b311a8]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 10: (PaxosService::propose_pending()+0x24f) [0x563fb9b315bf]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 11: ceph-mon(+0x2a6c5d) [0x563fb99a0c5d]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 12: (CommonSafeTimer::timer_thread()+0x130) [0x7f82120ddde0]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 13: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7f82120de841]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 14: /lib64/libc.so.6(+0x8b2fa) [0x7f821108b2fa]
2026-03-20T18:45:10.669 INFO:tasks.ceph.mon.a.vm00.stderr: 15: /lib64/libc.so.6(+0x1103d0) [0x7f82111103d0]
2026-03-20T18:45:10.670 INFO:tasks.ceph.mon.a.vm00.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
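The two kinds of backtrace above come from the same failure: MonitorDBStore.h line 356 is where apply_transaction() checks the result of the RocksDB commit and, on any error, calls ceph_abort_msg("failed to write to db"), which raises SIGABRT — hence the "Caught signal (Aborted)" frames and daemon-helper's "crashed with signal 6". A schematic paraphrase of that fail-fast pattern (not the actual Ceph source; abort_msg is a hypothetical stand-in for ceph_abort_msg):

    #include <cstdlib>
    #include <iostream>
    #include <string>

    // Hypothetical stand-in for ceph_abort_msg(): print the message, then
    // std::abort(), which delivers SIGABRT (signal 6) as seen in the log.
    [[noreturn]] void abort_msg(const std::string& msg) {
        std::cerr << "ceph_abort_msg(\"" << msg << "\")\n";
        std::abort();
    }

    // Sketch of the failure handling in MonitorDBStore::apply_transaction():
    // a monitor store write that fails is treated as fatal, since the monitor
    // cannot safely keep serving paxos/election state it could not persist.
    int apply_transaction_sketch(bool commit_ok) {
        if (!commit_ok)
            abort_msg("failed to write to db");
        return 0;
    }

    int main() {
        return apply_transaction_sketch(true);  // pass false to trigger the abort
    }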
2026-03-20T18:45:10.670 INFO:tasks.ceph.mon.a.vm00.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device
2026-03-20T18:45:10.881 INFO:tasks.ceph.mon.a.vm00.stderr:daemon-helper: command crashed with signal 6
2026-03-20T18:45:13.267 INFO:tasks.ceph.mon.b.vm02.stderr:2026-03-20T18:45:13.265+0000 7fcaed5a7640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-b/store.db/000025.log: No space left on device code =  Rocksdb transaction:
2026-03-20T18:45:13.267 INFO:tasks.ceph.mon.b.vm02.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7fcaed5a7640 time 2026-03-20T18:45:13.267080+0000
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7fcaf33911f3]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 2: ceph-mon(+0x2a69bc) [0x5571302749bc]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x557130358175]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 4: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x5571303619d1]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 5: (Elector::dead_ping(int)+0x1a1) [0x5571303594a1]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 6: ceph-mon(+0x2a6c5d) [0x557130274c5d]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 7: (CommonSafeTimer::timer_thread()+0x130) [0x7fcaf34ddde0]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 8: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fcaf34de841]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 9: /lib64/libc.so.6(+0x8b2fa) [0x7fcaf248b2fa]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 10: /lib64/libc.so.6(+0x1103d0) [0x7fcaf25103d0]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr:*** Caught signal (Aborted) **
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: in thread 7fcaed5a7640 thread_name:safe_timer
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7fcaf243fc30]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7fcaf248d03c]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 3: raise()
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 4: abort()
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7fcaf33912b0]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 6: ceph-mon(+0x2a69bc) [0x5571302749bc]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x557130358175]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 8: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x5571303619d1]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 9: (Elector::dead_ping(int)+0x1a1) [0x5571303594a1]
2026-03-20T18:45:13.268 INFO:tasks.ceph.mon.b.vm02.stderr: 10: ceph-mon(+0x2a6c5d) [0x557130274c5d]
2026-03-20T18:45:13.269 INFO:tasks.ceph.mon.b.vm02.stderr: 11: (CommonSafeTimer::timer_thread()+0x130) [0x7fcaf34ddde0]
2026-03-20T18:45:13.269 INFO:tasks.ceph.mon.b.vm02.stderr: 12: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fcaf34de841]
2026-03-20T18:45:13.269 INFO:tasks.ceph.mon.b.vm02.stderr: 13: /lib64/libc.so.6(+0x8b2fa) [0x7fcaf248b2fa]
2026-03-20T18:45:13.269 INFO:tasks.ceph.mon.b.vm02.stderr: 14: /lib64/libc.so.6(+0x1103d0) [0x7fcaf25103d0]
2026-03-20T18:45:13.269 INFO:tasks.ceph.mon.b.vm02.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
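The "Rocksdb transaction: PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)" lines describe the write batch that could not be committed. In terms of the public RocksDB C++ API, such a transaction is a WriteBatch handed to DB::Write(), and a full filesystem surfaces as a non-OK IOError status rather than an exception — roughly as sketched below (an illustration against upstream RocksDB, not Ceph's MonitorDBStore code; the path and key are hypothetical, chosen to mirror the log):

    #include <iostream>
    #include <string>
    #include <rocksdb/db.h>
    #include <rocksdb/write_batch.h>

    int main() {
        rocksdb::DB* db = nullptr;
        rocksdb::Options options;
        options.create_if_missing = true;
        // Hypothetical store path, standing in for /var/lib/ceph/mon/ceph-b/store.db.
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/mon-store-demo", &db);
        if (!s.ok()) { std::cerr << "open failed: " << s.ToString() << '\n'; return 1; }

        // One atomic batch, like the logged PutCF of a 238-byte value under a
        // "monitor"-prefixed key.
        rocksdb::WriteBatch batch;
        batch.Put("monitor:connectivity_scores", std::string(238, 'x'));

        s = db->Write(rocksdb::WriteOptions(), &batch);
        if (!s.ok()) {
            // With the disk full this is "IO error: No space left on device",
            // the same status the monitors turned into ceph_abort_msg() above.
            std::cerr << "submit failed: " << s.ToString() << '\n';
        }
        delete db;
        return s.ok() ? 0 : 1;
    }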
2026-03-20T18:45:13.269 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.269 INFO:tasks.ceph.mon.b.vm02.stderr:problem writing to /var/log/ceph/ceph-mon.b.log: (28) No space left on device
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: -2> 2026-03-20T18:45:13.265+0000 7fcaed5a7640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-b/store.db/000025.log: No space left on device code =  Rocksdb transaction:
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: -1> 2026-03-20T18:45:13.266+0000 7fcaed5a7640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7fcaed5a7640 time 2026-03-20T18:45:13.267080+0000
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7fcaf33911f3]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 2: ceph-mon(+0x2a69bc) [0x5571302749bc]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x557130358175]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 4: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x5571303619d1]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 5: (Elector::dead_ping(int)+0x1a1) [0x5571303594a1]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 6: ceph-mon(+0x2a6c5d) [0x557130274c5d]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 7: (CommonSafeTimer::timer_thread()+0x130) [0x7fcaf34ddde0]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 8: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fcaf34de841]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 9: /lib64/libc.so.6(+0x8b2fa) [0x7fcaf248b2fa]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 10: /lib64/libc.so.6(+0x1103d0) [0x7fcaf25103d0]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 0> 2026-03-20T18:45:13.267+0000 7fcaed5a7640 -1 *** Caught signal (Aborted) **
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: in thread 7fcaed5a7640 thread_name:safe_timer
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7fcaf243fc30]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7fcaf248d03c]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 3: raise()
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 4: abort()
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7fcaf33912b0]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 6: ceph-mon(+0x2a69bc) [0x5571302749bc]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x557130358175]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 8: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x5571303619d1]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 9: (Elector::dead_ping(int)+0x1a1) [0x5571303594a1]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 10: ceph-mon(+0x2a6c5d) [0x557130274c5d]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 11: (CommonSafeTimer::timer_thread()+0x130) [0x7fcaf34ddde0]
2026-03-20T18:45:13.284 INFO:tasks.ceph.mon.b.vm02.stderr: 12: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fcaf34de841]
2026-03-20T18:45:13.285 INFO:tasks.ceph.mon.b.vm02.stderr: 13: /lib64/libc.so.6(+0x8b2fa) [0x7fcaf248b2fa]
2026-03-20T18:45:13.285 INFO:tasks.ceph.mon.b.vm02.stderr: 14: /lib64/libc.so.6(+0x1103d0) [0x7fcaf25103d0]
2026-03-20T18:45:13.285 INFO:tasks.ceph.mon.b.vm02.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2026-03-20T18:45:13.285 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.285 INFO:tasks.ceph.mon.b.vm02.stderr:problem writing to /var/log/ceph/ceph-mon.b.log: (28) No space left on device
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: -9999> 2026-03-20T18:45:13.265+0000 7fcaed5a7640 -1 rocksdb: submit_common error: IO error: No space left on device: While open a file for appending: /var/lib/ceph/mon/ceph-b/store.db/000025.log: No space left on device code =  Rocksdb transaction:
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr:PutCF( prefix = monitor key = 'connectivity_scores' value size = 238)
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: -9998> 2026-03-20T18:45:13.266+0000 7fcaed5a7640 -1 /ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(TransactionRef)' thread 7fcaed5a7640 time 2026-03-20T18:45:13.267080+0000
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr:/ceph/rpmbuild/BUILD/ceph-20.2.0-712-g70f8415b/src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db")
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xc9) [0x7fcaf33911f3]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 2: ceph-mon(+0x2a69bc) [0x5571302749bc]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 3: (Elector::persist_connectivity_scores()+0x135) [0x557130358175]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 4: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x5571303619d1]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 5: (Elector::dead_ping(int)+0x1a1) [0x5571303594a1]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 6: ceph-mon(+0x2a6c5d) [0x557130274c5d]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 7: (CommonSafeTimer::timer_thread()+0x130) [0x7fcaf34ddde0]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 8: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fcaf34de841]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 9: /lib64/libc.so.6(+0x8b2fa) [0x7fcaf248b2fa]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 10: /lib64/libc.so.6(+0x1103d0) [0x7fcaf25103d0]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: -9997> 2026-03-20T18:45:13.267+0000 7fcaed5a7640 -1 *** Caught signal (Aborted) **
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: in thread 7fcaed5a7640 thread_name:safe_timer
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - RelWithDebInfo)
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 1: /lib64/libc.so.6(+0x3fc30) [0x7fcaf243fc30]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 2: /lib64/libc.so.6(+0x8d03c) [0x7fcaf248d03c]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 3: raise()
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 4: abort()
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x186) [0x7fcaf33912b0]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 6: ceph-mon(+0x2a69bc) [0x5571302749bc]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 7: (Elector::persist_connectivity_scores()+0x135) [0x557130358175]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 8: (ConnectionTracker::report_dead_connection(int, double)+0x181) [0x5571303619d1]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 9: (Elector::dead_ping(int)+0x1a1) [0x5571303594a1]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 10: ceph-mon(+0x2a6c5d) [0x557130274c5d]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 11: (CommonSafeTimer::timer_thread()+0x130) [0x7fcaf34ddde0]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 12: /usr/lib64/ceph/libceph-common.so.2(+0x2de841) [0x7fcaf34de841]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 13: /lib64/libc.so.6(+0x8b2fa) [0x7fcaf248b2fa]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: 14: /lib64/libc.so.6(+0x1103d0) [0x7fcaf25103d0]
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2026-03-20T18:45:13.288 INFO:tasks.ceph.mon.b.vm02.stderr:
2026-03-20T18:45:13.454 INFO:tasks.ceph.mon.b.vm02.stderr:daemon-helper: command crashed with signal 6
2026-03-20T18:45:13.993 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~0s
2026-03-20T18:45:13.993 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~0s
2026-03-20T18:45:13.993 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~0s
2026-03-20T18:45:20.305 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~6s
2026-03-20T18:45:20.306 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~6s
2026-03-20T18:45:20.306 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~6s
2026-03-20T18:45:26.617 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~13s
2026-03-20T18:45:26.617 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~13s
2026-03-20T18:45:26.617 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~13s
2026-03-20T18:45:32.924 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~19s
2026-03-20T18:45:32.924 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~19s
2026-03-20T18:45:32.924 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~19s
2026-03-20T18:45:39.234 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~25s
2026-03-20T18:45:39.234 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~25s
2026-03-20T18:45:39.234 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~25s
2026-03-20T18:45:45.541 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~32s
2026-03-20T18:45:45.541 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~32s
2026-03-20T18:45:45.541 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~32s
2026-03-20T18:45:51.846 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~38s
2026-03-20T18:45:51.847 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~38s
2026-03-20T18:45:51.847 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~38s
2026-03-20T18:45:58.153 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~44s
2026-03-20T18:45:58.153 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~44s
2026-03-20T18:45:58.153 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~44s
2026-03-20T18:46:04.460 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~50s
2026-03-20T18:46:04.460 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~50s
2026-03-20T18:46:04.460 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~50s
2026-03-20T18:46:10.766 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~57s
2026-03-20T18:46:10.766 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~57s
2026-03-20T18:46:10.766 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~57s
2026-03-20T18:46:17.077 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~63s
2026-03-20T18:46:17.077 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~63s
2026-03-20T18:46:17.077 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~63s
2026-03-20T18:46:23.388 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~69s
2026-03-20T18:46:23.388 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~69s
2026-03-20T18:46:23.388 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~69s
2026-03-20T18:46:29.695 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~76s
2026-03-20T18:46:29.695 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~76s
2026-03-20T18:46:29.695 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~76s
2026-03-20T18:46:36.005 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~82s
2026-03-20T18:46:36.005 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~82s
2026-03-20T18:46:36.005 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~82s
2026-03-20T18:46:39.895 INFO:tasks.rgw.client.1.vm02.stdout:problem writing to /var/log/ceph/rgw.ceph.client.1.log: tee: /var/log/ceph/rgw.ceph.client.1.stdout: No space left on device
2026-03-20T18:46:39.895 INFO:tasks.rgw.client.1.vm02.stdout:(28) No space left on device
2026-03-20T18:46:42.313 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~88s
2026-03-20T18:46:42.313 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~88s
2026-03-20T18:46:42.313 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~88s
2026-03-20T18:46:48.619 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~95s
2026-03-20T18:46:48.619 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~95s
2026-03-20T18:46:48.619 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~95s
2026-03-20T18:46:54.929 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~101s
2026-03-20T18:46:54.929 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~101s
2026-03-20T18:46:54.929 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~101s
2026-03-20T18:47:01.241 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~107s
2026-03-20T18:47:01.241 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~107s
2026-03-20T18:47:01.241 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~107s
2026-03-20T18:47:07.548 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~114s
2026-03-20T18:47:07.548 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~114s
2026-03-20T18:47:07.548 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~114s
2026-03-20T18:47:13.856 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~120s
2026-03-20T18:47:13.856 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~120s
2026-03-20T18:47:13.856 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~120s
2026-03-20T18:47:20.164 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~126s
2026-03-20T18:47:20.164 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~126s
2026-03-20T18:47:20.164 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~126s
2026-03-20T18:47:26.471 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~132s
2026-03-20T18:47:26.471 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~132s
2026-03-20T18:47:26.471 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~132s
2026-03-20T18:47:32.777 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~139s
2026-03-20T18:47:32.777 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~139s
2026-03-20T18:47:32.777 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~139s
2026-03-20T18:47:39.083 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~145s
2026-03-20T18:47:39.083 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~145s
2026-03-20T18:47:39.083 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~145s
2026-03-20T18:47:45.388 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~151s
2026-03-20T18:47:45.389 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~151s
2026-03-20T18:47:45.389 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~151s
2026-03-20T18:47:51.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~158s
2026-03-20T18:47:51.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~158s
2026-03-20T18:47:51.696 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~158s
2026-03-20T18:47:58.002 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~164s
2026-03-20T18:47:58.002 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~164s
2026-03-20T18:47:58.002 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~164s
2026-03-20T18:48:04.309 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~170s
2026-03-20T18:48:04.309 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~170s
2026-03-20T18:48:04.309 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~170s
2026-03-20T18:48:10.617 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~177s
2026-03-20T18:48:10.617 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~177s
2026-03-20T18:48:10.617 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~177s
2026-03-20T18:48:16.924 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~183s
2026-03-20T18:48:16.925 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~183s
2026-03-20T18:48:16.925 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~183s
2026-03-20T18:48:23.232 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~189s
2026-03-20T18:48:23.233 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~189s
2026-03-20T18:48:23.233 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~189s
2026-03-20T18:48:29.540 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~196s
2026-03-20T18:48:29.540 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~196s
2026-03-20T18:48:29.540 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~196s
2026-03-20T18:48:35.846 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~202s
2026-03-20T18:48:35.846 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~202s
2026-03-20T18:48:35.846 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~202s
2026-03-20T18:48:42.153 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~208s
2026-03-20T18:48:42.153 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~208s
2026-03-20T18:48:42.153 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~208s
2026-03-20T18:48:48.459 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~214s
2026-03-20T18:48:48.459 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~214s
2026-03-20T18:48:48.459 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~214s
2026-03-20T18:48:54.764 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~221s
2026-03-20T18:48:54.764 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~221s
2026-03-20T18:48:54.764 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~221s
2026-03-20T18:49:01.070 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~227s
2026-03-20T18:49:01.071 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~227s
2026-03-20T18:49:01.071 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~227s
2026-03-20T18:49:07.378 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~233s
2026-03-20T18:49:07.378 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~233s
2026-03-20T18:49:07.378 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~233s
2026-03-20T18:49:13.688 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~240s
2026-03-20T18:49:13.689 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~240s
2026-03-20T18:49:13.689 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~240s
2026-03-20T18:49:19.999 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~246s
2026-03-20T18:49:19.999 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~246s
2026-03-20T18:49:20.000 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~246s
2026-03-20T18:49:26.308 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~252s
2026-03-20T18:49:26.309 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~252s
2026-03-20T18:49:26.309 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~252s
2026-03-20T18:49:32.614 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~259s
2026-03-20T18:49:32.614 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~259s
2026-03-20T18:49:32.614 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~259s
2026-03-20T18:49:38.924 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~265s
2026-03-20T18:49:38.924 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~265s
2026-03-20T18:49:38.924 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~265s
2026-03-20T18:49:45.235 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~271s
2026-03-20T18:49:45.235 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~271s
2026-03-20T18:49:45.235 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~271s
2026-03-20T18:49:51.544 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~278s
2026-03-20T18:49:51.544 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~278s
2026-03-20T18:49:51.544 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~278s
2026-03-20T18:49:57.853 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~284s
2026-03-20T18:49:57.854 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~284s
2026-03-20T18:49:57.854 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~284s
2026-03-20T18:50:04.162 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~290s
2026-03-20T18:50:04.163 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~290s
2026-03-20T18:50:04.163 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~290s
2026-03-20T18:50:10.473 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~296s
2026-03-20T18:50:10.473 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~296s
2026-03-20T18:50:10.473 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~296s
2026-03-20T18:50:16.783 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~303s
2026-03-20T18:50:16.783 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.c is failed for ~303s
2026-03-20T18:50:16.783 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.b is failed for ~303s
2026-03-20T18:50:16.783 INFO:tasks.daemonwatchdog.daemon_watchdog:BARK! unmounting mounts and killing all daemons
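The watchdog entries above show the daemon watchdog polling roughly every 6 seconds and accumulating how long each mon has been down; once the oldest failure passes a ~300s grace period it "barks" and tears the whole job down. A simplified sketch of that pattern; GRACE, INTERVAL, and the watch() helper are illustrative assumptions, not teuthology's actual daemonwatchdog API:

    import time

    GRACE = 300     # seconds a daemon may stay failed before giving up
    INTERVAL = 6    # polling period, matching the ~6s cadence in the log

    def watch(daemons):
        # daemons: mapping of name -> subprocess.Popen
        first_failed = {}
        while True:
            now = time.monotonic()
            for name, proc in daemons.items():
                if proc.poll() is not None:  # the process has exited
                    first_failed.setdefault(name, now)
                    elapsed = now - first_failed[name]
                    print(f"daemon {name} is failed for ~{int(elapsed)}s")
                    if elapsed > GRACE:
                        print("BARK! unmounting mounts and killing all daemons")
                        return
            time.sleep(INTERVAL)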
2026-03-20T18:50:18.092 INFO:tasks.ceph.osd.0:Sent signal 15
2026-03-20T18:50:18.092 INFO:tasks.ceph.osd.1:Sent signal 15
2026-03-20T18:50:18.092 INFO:tasks.ceph.osd.2:Sent signal 15
2026-03-20T18:50:18.092 INFO:tasks.ceph.osd.3:Sent signal 15
2026-03-20T18:50:18.092 INFO:tasks.ceph.osd.4:Sent signal 15
2026-03-20T18:50:18.092 INFO:tasks.ceph.osd.5:Sent signal 15
2026-03-20T18:50:18.092 INFO:tasks.ceph.osd.6:Sent signal 15
2026-03-20T18:50:18.092 INFO:tasks.ceph.osd.7:Sent signal 15
2026-03-20T18:50:18.092 INFO:tasks.rgw.client.0:Sent signal 15
2026-03-20T18:50:18.093 INFO:tasks.rgw.client.1:Sent signal 15
2026-03-20T18:50:18.093 INFO:tasks.rgw.client.2:Sent signal 15
2026-03-20T18:50:18.093 INFO:tasks.ceph.mgr.y:Sent signal 15
2026-03-20T18:50:18.093 INFO:tasks.ceph.mgr.x:Sent signal 15
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T18:50:18.091+0000 7fc55d9cd640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 0 (PID: 58753) UID: 0
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T18:50:18.091+0000 7fc55d9cd640 -1 osd.0 73 *** Got signal Terminated ***
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T18:50:18.091+0000 7fc55d9cd640 -1 osd.0 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T18:50:18.091+0000 7fdebd9a9640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 1 (PID: 58740) UID: 0
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T18:50:18.091+0000 7fdebd9a9640 -1 osd.1 73 *** Got signal Terminated ***
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T18:50:18.091+0000 7fdebd9a9640 -1 osd.1 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T18:50:18.092+0000 7f80acd65640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 2 (PID: 58758) UID: 0
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T18:50:18.092+0000 7f80acd65640 -1 osd.2 73 *** Got signal Terminated ***
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T18:50:18.092+0000 7f80acd65640 -1 osd.2 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T18:50:18.093 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T18:50:18.092+0000 7f8b7fea3640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 3 (PID: 58750) UID: 0
2026-03-20T18:50:18.094 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T18:50:18.092+0000 7f8b7fea3640 -1 osd.3 73 *** Got signal Terminated ***
2026-03-20T18:50:18.094 INFO:tasks.ceph.osd.3.vm00.stderr:2026-03-20T18:50:18.092+0000 7f8b7fea3640 -1 osd.3 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T18:50:18.094 INFO:tasks.rgw.client.0.vm00.stdout:2026-03-20T18:50:18.092+0000 7fd2ebe44640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --foreground (PID: 64581) UID: 0
2026-03-20T18:50:18.094 INFO:tasks.rgw.client.0.vm00.stdout:2026-03-20T18:50:18.092+0000 7fd2f172c980 -1 shutting down
2026-03-20T18:50:18.094 INFO:tasks.rgw.client.2.vm05.stdout:2026-03-20T18:50:18.094+0000 7fee168dc640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.2 --cluster ceph -k /etc/ceph/ceph.client.2.keyring --log-file /var/log/ceph/rgw.ceph.client.2.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock --foreground (PID: 50927) UID: 0
2026-03-20T18:50:18.094 INFO:tasks.rgw.client.2.vm05.stdout:2026-03-20T18:50:18.094+0000 7fee1c324980 -1 shutting down
2026-03-20T18:50:18.094 INFO:tasks.ceph.osd.4.vm02.stderr:2026-03-20T18:50:18.092+0000 7fa4320fc640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 4 (PID: 57312) UID: 0
2026-03-20T18:50:18.094 INFO:tasks.ceph.osd.4.vm02.stderr:2026-03-20T18:50:18.092+0000 7fa4320fc640 -1 osd.4 73 *** Got signal Terminated ***
2026-03-20T18:50:18.094 INFO:tasks.ceph.osd.4.vm02.stderr:2026-03-20T18:50:18.092+0000 7fa4320fc640 -1 osd.4 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.5.vm02.stderr:2026-03-20T18:50:18.092+0000 7fcd3d3d6640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 5 (PID: 57311) UID: 0
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.5.vm02.stderr:2026-03-20T18:50:18.092+0000 7fcd3d3d6640 -1 osd.5 73 *** Got signal Terminated ***
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.5.vm02.stderr:2026-03-20T18:50:18.092+0000 7fcd3d3d6640 -1 osd.5 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.6.vm02.stderr:2026-03-20T18:50:18.092+0000 7fb8052f8640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 6 (PID: 57310) UID: 0
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.6.vm02.stderr:2026-03-20T18:50:18.092+0000 7fb8052f8640 -1 osd.6 73 *** Got signal Terminated ***
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.6.vm02.stderr:2026-03-20T18:50:18.092+0000 7fb8052f8640 -1 osd.6 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.7.vm02.stderr:2026-03-20T18:50:18.092+0000 7f01a7082640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 7 (PID: 57316) UID: 0
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.7.vm02.stderr:2026-03-20T18:50:18.092+0000 7f01a7082640 -1 osd.7 73 *** Got signal Terminated ***
2026-03-20T18:50:18.095 INFO:tasks.ceph.osd.7.vm02.stderr:2026-03-20T18:50:18.092+0000 7f01a7082640 -1 osd.7 73 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-20T18:50:18.095 INFO:tasks.rgw.client.1.vm02.stdout:2026-03-20T18:50:18.092+0000 7f368d691640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper term radosgw --rgw-frontends beast port=80 -n client.1 --cluster ceph -k /etc/ceph/ceph.client.1.keyring --log-file /var/log/ceph/rgw.ceph.client.1.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock --foreground (PID: 61487) UID: 0
2026-03-20T18:50:18.095 INFO:tasks.rgw.client.1.vm02.stdout:2026-03-20T18:50:18.092+0000 7f3691131980 -1 shutting down
2026-03-20T18:50:18.294 INFO:tasks.ceph.mgr.y.vm00.stderr:daemon-helper: command crashed with signal 15
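From here the log switches back to the dedup test run itself. With every radosgw terminated by the watchdog, nothing is listening on vm00.local:80 anymore, so the boto client's TCP connect in the traceback below fails with ECONNREFUSED. A minimal probe showing the same failure mode at the socket level; endpoint_alive is a hypothetical helper, not part of the test suite:

    import socket

    def endpoint_alive(host: str, port: int, timeout: float = 5.0) -> bool:
        # socket.create_connection is what urllib3 ultimately calls; with no
        # listener on the port it raises ConnectionRefusedError (errno 111).
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # covers ConnectionRefusedError, timeouts, DNS failures
            return False

    print(endpoint_alive("vm00.local", 80))  # endpoint taken from the traceback below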
2026-03-20T18:54:46.371 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale
2026-03-20T18:54:46.371 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------- live log call ---------------------------------
2026-03-20T18:54:46.371 INFO:teuthology.orchestra.run.vm00.stdout:WARNING dedup.test_dedup:test_dedup.py:2748 test_dedup_dry_large_scale: failed!!
2026-03-20T18:54:53.503 INFO:teuthology.orchestra.run.vm00.stdout:FAILED [ 97%]
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_cleanup PASSED [100%]
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout:=================================== FAILURES ===================================
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout:__________________________ test_dedup_dry_large_scale __________________________
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout:self =
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout: def _new_conn(self):
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout: """Establish a socket connection and set nodelay settings on it.
2026-03-20T18:54:53.506 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: :return: New socket connection.
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: """
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw = {}
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: if self.source_address:
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw["source_address"] = self.source_address
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: if self.socket_options:
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw["socket_options"] = self.socket_options
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout:> conn = connection.create_connection(
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: (self._dns_host, self.port), self.timeout, **extra_kw
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connection.py:174:
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/util/connection.py:95: in create_connection
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout: raise err
2026-03-20T18:54:53.507 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:address = ('vm00.local', 80), timeout = 60, source_address = None
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:socket_options = [(6, 1, 1)]
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: def create_connection(
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: address,
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: source_address=None,
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: socket_options=None,
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: ):
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: """Connect to *address* and return the socket object.
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: Convenience function. Connect to *address* (a 2-tuple ``(host,
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: port)``) and return the socket object. Passing the optional
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: *timeout* parameter will set the timeout on the socket instance
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: before attempting to connect. If no *timeout* is supplied, the
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: global default timeout setting returned by :func:`socket.getdefaulttimeout`
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: is used. If *source_address* is set it must be a tuple of (host, port)
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: for the socket to bind as a source address before making the connection.
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: An host of '' or port 0 tells the OS to use the default.
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: """
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: host, port = address
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: if host.startswith("["):
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: host = host.strip("[]")
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: err = None
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: # Using the value from allowed_gai_family() in the context of getaddrinfo lets
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: # us select whether to work with IPv4 DNS records, IPv6 records, or both.
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: # The original create_connection function always returns all records.
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: family = allowed_gai_family()
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: host.encode("idna")
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: except UnicodeError:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: return six.raise_from(
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: LocationParseError(u"'%s', label empty or too long" % host), None
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: af, socktype, proto, canonname, sa = res
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: sock = None
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: sock = socket.socket(af, socktype, proto)
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: # If provided, set socket level options before connecting.
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: _set_socket_options(sock, socket_options)
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: sock.settimeout(timeout)
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: if source_address:
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout: sock.bind(source_address)
2026-03-20T18:54:53.508 INFO:teuthology.orchestra.run.vm00.stdout:> sock.connect(sa)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:E ConnectionRefusedError: [Errno 111] Connection refused
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/util/connection.py:85: ConnectionRefusedError
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:During handling of the above exception, another exception occurred:
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:self =
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:request =
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: def send(self, request):
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: proxy_url = self._proxy_config.proxy_url_for(request.url)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: manager = self._get_connection_manager(request.url, proxy_url)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: conn = manager.connection_from_url(request.url)
self._setup_ssl_cert(conn, request.url, self._verify) 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: if ensure_boolean( 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: os.environ.get('BOTO_EXPERIMENTAL__ADD_PROXY_HOST_HEADER', '') 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: ): 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: # This is currently an "experimental" feature which provides 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: # no guarantees of backwards compatibility. It may be subject 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: # to change or removal in any patch version. Anyone opting in 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: # to this feature should strictly pin botocore. 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: host = urlparse(request.url).hostname 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: conn.proxy_headers['host'] = host 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: request_target = self._get_request_target(request.url, proxy_url) 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:> urllib_response = conn.urlopen( 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: method=request.method, 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: url=request_target, 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: body=request.body, 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: headers=request.headers, 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: retries=Retry(False), 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: assert_same_host=False, 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: preload_content=False, 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: decode_content=False, 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: chunked=self._chunked(request.headers), 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: ) 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/httpsession.py:477: 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py:802: in urlopen 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: retries = retries.increment( 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/util/retry.py:527: in increment 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: raise six.reraise(type(error), error, _stacktrace) 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/packages/six.py:770: in reraise 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: raise value 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py:716: in urlopen 2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: httplib_response = self._make_request( 
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connectionpool.py:416: in _make_request
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: conn.request(method, url, **httplib_request_kw)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py:96: in request
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: rval = super().request(method, url, body, headers, *args, **kwargs)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connection.py:244: in request
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: super(HTTPConnection, self).request(method, url, body=body, headers=headers)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:/usr/lib64/python3.9/http/client.py:1285: in request
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: self._send_request(method, url, body, headers, encode_chunked)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:/usr/lib64/python3.9/http/client.py:1331: in _send_request
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: self.endheaders(body, encode_chunked=encode_chunked)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:/usr/lib64/python3.9/http/client.py:1280: in endheaders
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout: self._send_output(message_body, encode_chunked=encode_chunked)
2026-03-20T18:54:53.509 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py:123: in _send_output
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: self.send(msg)
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/awsrequest.py:223: in send
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: return super().send(str)
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:/usr/lib64/python3.9/http/client.py:980: in send
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: self.connect()
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connection.py:205: in connect
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: conn = self._new_conn()
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:self =
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: def _new_conn(self):
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: """Establish a socket connection and set nodelay settings on it.
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: :return: New socket connection.
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: """
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw = {}
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: if self.source_address:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw["source_address"] = self.source_address
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: if self.socket_options:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: extra_kw["socket_options"] = self.socket_options
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: conn = connection.create_connection(
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: (self._dns_host, self.port), self.timeout, **extra_kw
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: except SocketTimeout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: raise ConnectTimeoutError(
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: self,
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: "Connection to %s timed out. (connect timeout=%s)"
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: % (self.host, self.timeout),
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: except SocketError as e:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:> raise NewConnectionError(
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: self, "Failed to establish a new connection: %s" % e
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/urllib3/connection.py:186: NewConnectionError
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:During handling of the above exception, another exception occurred:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: @pytest.mark.basic_test
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: def test_dedup_dry_large_scale():
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout: #return
2026-03-20T18:54:53.510 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: prepare_test()
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: max_copies_count=3
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: num_threads=64
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: num_files=32*1024
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: size=1*KB
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: files=[]
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: config=TransferConfig(multipart_threshold=size, multipart_chunksize=1*MB)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: log.debug("test_dedup_dry_large_scale_new: connect to AWS ...")
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: gen_files_fixed_size(files, num_files, size, max_copies_count)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: conns=get_connections(num_threads)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: bucket_names=get_buckets(num_threads)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: for i in range(num_threads):
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: conns[i].create_bucket(Bucket=bucket_names[i])
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: threads_simple_dedup_with_tenants(files, conns, bucket_names, config, True)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: except:
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: log.warning("test_dedup_dry_large_scale: failed!!")
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: finally:
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: # cleanup must be executed even after a failure
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:> cleanup_all_buckets(bucket_names, conns)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:2751:
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:496: in cleanup_all_buckets
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: delete_bucket_with_all_objects(bucket_name, conn)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py:452: in delete_bucket_with_all_objects
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: listing=conn.list_objects(Bucket=bucket_name, Marker=marker, MaxKeys=max_keys)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/client.py:602: in _api_call
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: return self._make_api_call(operation_name, kwargs)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/context.py:123: in wrapper
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: return func(*args, **kwargs)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/client.py:1060: in _make_api_call
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: http, parsed_response = self._make_request(
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/client.py:1084: in _make_request
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: return self._endpoint.make_request(operation_model, request_dict)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:119: in make_request
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: return self._send_request(request_dict, operation_model)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:200: in _send_request
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: while self._needs_retry(
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:360: in _needs_retry
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: responses = self._event_emitter.emit(
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/hooks.py:412: in emit
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: return self._emitter.emit(aliased_event_name, **kwargs)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/hooks.py:256: in emit
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: return self._emit(event_name, kwargs)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/hooks.py:239: in _emit
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: response = handler(**kwargs)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:207: in __call__
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: if self._checker(**checker_kwargs):
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:284: in __call__
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: should_retry = self._should_retry(
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:320: in _should_retry
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: return self._checker(attempt_number, response, caught_exception)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:363: in __call__
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: checker_response = checker(
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:247: in __call__
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: return self._check_caught_exception(
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/retryhandler.py:416: in _check_caught_exception
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: raise caught_exception
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:279: in _do_get_response
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout: http_response = self._send(request)
2026-03-20T18:54:53.511 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/endpoint.py:383: in _send
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: return self.http_session.send(request)
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout:self =
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout:request =
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: def send(self, request):
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: try:
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: proxy_url = self._proxy_config.proxy_url_for(request.url)
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: manager = self._get_connection_manager(request.url, proxy_url)
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: conn = manager.connection_from_url(request.url)
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: self._setup_ssl_cert(conn, request.url, self._verify)
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: if ensure_boolean(
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: os.environ.get('BOTO_EXPERIMENTAL__ADD_PROXY_HOST_HEADER', '')
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: ):
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: # This is currently an "experimental" feature which provides
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: # no guarantees of backwards compatibility. It may be subject
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: # to change or removal in any patch version. Anyone opting in
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: # to this feature should strictly pin botocore.
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: host = urlparse(request.url).hostname
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: conn.proxy_headers['host'] = host
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: request_target = self._get_request_target(request.url, proxy_url)
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response = conn.urlopen(
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: method=request.method,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: url=request_target,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: body=request.body,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: headers=request.headers,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: retries=Retry(False),
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: assert_same_host=False,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: preload_content=False,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: decode_content=False,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: chunked=self._chunked(request.headers),
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: http_response = botocore.awsrequest.AWSResponse(
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: request.url,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response.status,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response.headers,
2026-03-20T18:54:53.512 INFO:teuthology.orchestra.run.vm00.stdout: urllib_response,
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: )
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: if not request.stream_output:
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: # Cause the raw stream to be exhausted immediately. We do it
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: # this way instead of using preload_content because
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: # preload_content will never buffer chunked responses
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: http_response.content
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: return http_response
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: except URLLib3SSLError as e:
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: raise SSLError(endpoint_url=request.url, error=e)
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: except (NewConnectionError, socket.gaierror) as e:
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:> raise EndpointConnectionError(endpoint_url=request.url, error=e)
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:E botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://vm00.local:80/qsglljryjwxvuhil-86?marker=&max-keys=1000&encoding-type=url"
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:.tox/py/lib/python3.9/site-packages/botocore/httpsession.py:506: EndpointConnectionError
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:----------------------------- Captured stderr call -----------------------------
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setuser ceph since I am not root
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setgroup ceph since I am not root
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setuser ceph since I am not root
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setgroup ceph since I am not root
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setuser ceph since I am not root
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:ignoring --setgroup ceph since I am not root
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:failed to fetch mon config (--no-mon-config to skip)
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:------------------------------ Captured log call -------------------------------
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:WARNING dedup.test_dedup:test_dedup.py:2748 test_dedup_dry_large_scale: failed!!
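The chain above is easiest to read bottom-up: the kernel refuses the TCP connect, urllib3's create_connection() surfaces it as ConnectionRefusedError, urllib3 wraps that in NewConnectionError, and botocore's send() finally raises EndpointConnectionError for http://vm00.local:80. Combined with the captured stderr ("failed to fetch mon config"), this points at the RGW endpoint (and apparently the whole cluster) no longer answering partway through the run, not at a client-side bug. As a triage aid, a minimal sketch of a liveness probe for that endpoint; the host and port are taken from the failing URL above, and the helper itself is illustrative, not part of the dedup suite:

    # Minimal TCP liveness probe for the RGW endpoint named in the
    # EndpointConnectionError above. Host/port come from the failing URL;
    # the helper name is illustrative, not part of the test suite.
    import socket

    def endpoint_is_up(host: str, port: int, timeout: float = 5.0) -> bool:
        try:
            # socket.create_connection() is the same call urllib3 ultimately
            # makes; ConnectionRefusedError is a subclass of OSError.
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(endpoint_is_up("vm00.local", 80))  # False while RGW is down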
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:=============================== warnings summary ===============================
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_with_tenants
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_large_mix
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_basic_with_tenants
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_multipart_with_tenants
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_small_multipart_with_tenants
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale_with_tenants
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:test_dedup.py::test_dedup_dry_large_scale
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/.tox/py/lib/python3.9/site-packages/boto3/compat.py:89: PythonDeprecationWarning: Boto3 will no longer support Python 3.9 starting April 29, 2026. To continue receiving service updates, bug fixes, and security updates please upgrade to Python 3.10 or later. More information can be found here: https://aws.amazon.com/blogs/developer/python-support-policy-updates-for-aws-sdks-and-tools/
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout: warnings.warn(warning, PythonDeprecationWarning)
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:=========================== short test summary info ============================
2026-03-20T18:54:53.513 INFO:teuthology.orchestra.run.vm00.stdout:FAILED test_dedup.py::test_dedup_dry_large_scale - botocore.exceptions.Endpoi...
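One detail worth noting in the test source shown earlier: threads_simple_dedup_with_tenants() is wrapped in a bare except that only logs a warning, so the FAILED report actually comes from the finally block, where cleanup_all_buckets() hits the already-dead endpoint; whatever went wrong inside the worker threads is never re-raised. A hypothetical hardening of that pattern (the helper name and structure are illustrative; cleanup_all_buckets, bucket_names and conns are the names from the test above):

    # Hypothetical pattern: always attempt cleanup, but never let a cleanup
    # failure replace the exception raised by the test body itself.
    import logging

    log = logging.getLogger("dedup.test_dedup")

    def run_with_cleanup(work, cleanup):
        try:
            work()  # e.g. threads_simple_dedup_with_tenants(...)
        finally:
            try:
                cleanup()  # e.g. cleanup_all_buckets(bucket_names, conns)
            except Exception:
                # Log the cleanup failure; because the 'finally' block exits
                # normally, any exception from work() propagates unmasked.
                log.exception("cleanup failed")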
2026-03-20T18:54:53.835 INFO:teuthology.orchestra.run.vm00.stdout:============ 1 failed, 33 passed, 8 warnings in 1607.19s (0:26:47) =============
2026-03-20T18:54:54.066 INFO:teuthology.orchestra.run.vm00.stdout:py: exit 1 (1607.84 seconds) /home/ubuntu/cephtest/ceph/src/test/rgw/dedup> pytest -v -m 'basic_test or request_test or example_test' pid=65563
2026-03-20T18:54:54.068 INFO:teuthology.orchestra.run.vm00.stdout: py: FAIL code 1 (1610.76=setup[2.92]+cmd[1607.84] seconds)
2026-03-20T18:54:54.068 INFO:teuthology.orchestra.run.vm00.stdout: evaluation failed :( (1610.77 seconds)
2026-03-20T18:54:54.105 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T18:54:54.105 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 30, in nested
    vars.append(enter())
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/dedup_tests.py", line 191, in run_tests
    toxvenv_sh(ctx, remote, args, label="dedup tests against rgw")
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/dedup_tests.py", line 165, in toxvenv_sh
    return remote.sh(['source', activate, run.Raw('&&')] + args, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 97, in sh
    proc = self.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (dedup tests against rgw) on vm00 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test'"
2026-03-20T18:54:54.106 INFO:tasks.dedup_tests:Removing dedup-tests.conf file...
2026-03-20T18:54:54.106 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/deduptests.client.0.conf
2026-03-20T18:54:54.127 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph
2026-03-20T18:54:54.202 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T18:54:54.202 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T18:59:54.204 INFO:teuthology.orchestra.run.vm00.stderr:failed to fetch mon config (--no-mon-config to skip)
2026-03-20T18:59:54.206 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T18:59:54.207 INFO:tasks.dedup_tests:Removing dedup-tests...
2026-03-20T18:59:54.207 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/ceph
2026-03-20T18:59:54.754 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/dedup_tests.py", line 107, in create_users
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 30, in nested
    vars.append(enter())
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/dedup_tests.py", line 191, in run_tests
    toxvenv_sh(ctx, remote, args, label="dedup tests against rgw")
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/dedup_tests.py", line 165, in toxvenv_sh
    return remote.sh(['source', activate, run.Raw('&&')] + args, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 97, in sh
    proc = self.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (dedup tests against rgw) on vm00 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/ceph/src/test/rgw/dedup/ && DEDUPTESTS_CONF=./deduptests.client.0.conf tox -- -v -m 'basic_test or request_test or example_test'"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 112, in run_tasks
    manager.__enter__()
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/dedup_tests.py", line 240, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/dedup_tests.py", line 45, in download
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/dedup_tests.py", line 114, in create_users
    ctx.cluster.only(client).run(
  File "/home/teuthos/teuthology/teuthology/orchestra/cluster.py", line 85, in run
    procs = [remote.run(**kwargs, wait=_wait) for remote in remotes]
  File "/home/teuthos/teuthology/teuthology/orchestra/cluster.py", line 85, in <listcomp>
    procs = [remote.run(**kwargs, wait=_wait) for remote in remotes]
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'
2026-03-20T18:59:54.754 DEBUG:teuthology.run_tasks:Unwinding manager dedup-tests
2026-03-20T18:59:54.757 DEBUG:teuthology.run_tasks:Unwinding manager tox
2026-03-20T18:59:54.759 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/tox-venv
2026-03-20T18:59:54.823 DEBUG:teuthology.run_tasks:Unwinding manager tox
2026-03-20T18:59:54.825 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/tox-venv
2026-03-20T18:59:54.838 DEBUG:teuthology.run_tasks:Unwinding manager rgw
2026-03-20T18:59:54.840 DEBUG:tasks.rgw.client.0:waiting for process to exit
2026-03-20T18:59:54.840 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T18:59:54.840 INFO:tasks.rgw.client.0:Stopped
2026-03-20T18:59:54.840 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock
2026-03-20T18:59:54.891 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/vault-root-token
2026-03-20T18:59:54.959 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /home/ubuntu/cephtest/url_file
2026-03-20T18:59:55.022 INFO:tasks.util.rgw:rgwadmin: client.0 : ['gc', 'process', '--include-all']
2026-03-20T18:59:55.022 DEBUG:tasks.util.rgw:rgwadmin: cmd=['adjust-ulimits', 'ceph-coverage', '/home/ubuntu/cephtest/archive/coverage', 'radosgw-admin', '--log-to-stderr', '--format', 'json', '-n', 'client.0', '--cluster', 'ceph', 'gc', 'process', '--include-all']
2026-03-20T18:59:55.022 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all
2026-03-20T18:59:55.096 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setuser ceph since I am not root
2026-03-20T18:59:55.096 INFO:teuthology.orchestra.run.vm00.stderr:ignoring --setgroup ceph since I am not root
2026-03-20T19:04:55.097 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T19:04:55.096+0000 7f4d89820900 0 monclient(hunting): authenticate timed out after 300
2026-03-20T19:04:55.097 INFO:teuthology.orchestra.run.vm00.stderr:failed to fetch mon config (--no-mon-config to skip)
2026-03-20T19:04:55.099 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:04:55.099 ERROR:teuthology.run_tasks:Manager failed: rgw
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'
2026-03-20T19:04:55.099 DEBUG:teuthology.run_tasks:Unwinding manager openssl_keys
2026-03-20T19:04:55.102 DEBUG:teuthology.run_tasks:Unwinding manager ceph
2026-03-20T19:04:55.104 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean
2026-03-20T19:04:55.104 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json
2026-03-20T19:06:55.169 DEBUG:teuthology.orchestra.run:got remote process result: 124
2026-03-20T19:06:55.169 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
2026-03-20T19:06:55.170 INFO:teuthology.misc:Shutting down mds daemons...
2026-03-20T19:06:55.170 INFO:teuthology.misc:Shutting down osd daemons...
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.osd.0:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 INFO:tasks.ceph.osd.0:Stopped
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.osd.1:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 INFO:tasks.ceph.osd.1:Stopped
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.osd.2:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 INFO:tasks.ceph.osd.2:Stopped
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.osd.3:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 INFO:tasks.ceph.osd.3:Stopped
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.osd.4:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 INFO:tasks.ceph.osd.4:Stopped
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.osd.5:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 INFO:tasks.ceph.osd.5:Stopped
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.osd.6:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 INFO:tasks.ceph.osd.6:Stopped
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.osd.7:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 INFO:tasks.ceph.osd.7:Stopped
2026-03-20T19:06:55.170 INFO:teuthology.misc:Shutting down mgr daemons...
2026-03-20T19:06:55.170 DEBUG:tasks.ceph.mgr.y:waiting for process to exit
2026-03-20T19:06:55.170 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.170 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:06:55.170 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i y'
2026-03-20T19:06:55.171 INFO:tasks.ceph.mgr.y:Stopped
2026-03-20T19:06:55.171 DEBUG:tasks.ceph.mgr.x:waiting for process to exit
2026-03-20T19:06:55.171 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.171 INFO:tasks.ceph.mgr.x:Stopped
2026-03-20T19:06:55.171 INFO:teuthology.misc:Shutting down mon daemons...
2026-03-20T19:06:55.171 DEBUG:tasks.ceph.mon.a:waiting for process to exit
2026-03-20T19:06:55.171 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.171 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:06:55.171 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a'
2026-03-20T19:06:55.171 INFO:tasks.ceph.mon.a:Stopped
2026-03-20T19:06:55.171 DEBUG:tasks.ceph.mon.c:waiting for process to exit
2026-03-20T19:06:55.171 INFO:teuthology.orchestra.run:waiting for 300
2026-03-20T19:06:55.171 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:06:55.171 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 1526, in run_daemon
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line
461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop run.wait([self.proc], timeout=timeout) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait proc.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i c' 2026-03-20T19:06:55.171 INFO:tasks.ceph.mon.c:Stopped 2026-03-20T19:06:55.171 DEBUG:tasks.ceph.mon.b:waiting for process to exit 2026-03-20T19:06:55.171 INFO:teuthology.orchestra.run:waiting for 300 2026-03-20T19:06:55.171 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-20T19:06:55.171 ERROR:teuthology.orchestra.daemon.state:Error while waiting for process to exit Traceback (most recent call last): File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2001, in task yield File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task with contextutil.nested(*subtasks): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools yield File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True) File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin proc = remote.run( File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() 
File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 1526, in run_daemon yield File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested yield vars File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2011, in task ctx.managers[config['cluster']].wait_for_clean() File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean num_active_clean = self.get_num_active_clean() File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean pgs = self.get_pg_stats() File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats out = self.raw_cluster_cmd('pg', 'dump', '--format=json') File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd return self.run_cluster_cmd(**kwargs).stdout.getvalue() File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd return self.controller.run(**kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/orchestra/daemon/state.py", line 146, in stop run.wait([self.proc], timeout=timeout) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait proc.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm02 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i b' 2026-03-20T19:06:55.172 INFO:tasks.ceph.mon.b:Stopped 2026-03-20T19:06:55.172 INFO:tasks.ceph:Checking cluster log for badness... 
2026-03-20T19:06:55.172 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v 'not have an application enabled' | head -n 1
2026-03-20T19:06:55.198 INFO:teuthology.orchestra.run.vm00.stdout:2026-03-20T18:41:29.024475+0000 mon.a (mon.0) 704 : cluster [ERR] Health check failed: mon c is very low on available space (MON_DISK_CRIT)
2026-03-20T19:06:55.198 WARNING:tasks.ceph:Found errors (ERR|WRN|SEC) in cluster log
2026-03-20T19:06:55.198 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-0 on ubuntu@vm00.local
2026-03-20T19:06:55.198 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-0
2026-03-20T19:06:55.330 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-1 on ubuntu@vm00.local
2026-03-20T19:06:55.330 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-1
2026-03-20T19:06:55.423 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-2 on ubuntu@vm00.local
2026-03-20T19:06:55.424 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-2
2026-03-20T19:06:55.511 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-3 on ubuntu@vm00.local
2026-03-20T19:06:55.511 DEBUG:teuthology.orchestra.run.vm00:> sync && sudo umount -f /var/lib/ceph/osd/ceph-3
2026-03-20T19:06:55.599 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-4 on ubuntu@vm02.local
2026-03-20T19:06:55.599 DEBUG:teuthology.orchestra.run.vm02:> sync && sudo umount -f /var/lib/ceph/osd/ceph-4
2026-03-20T19:06:55.727 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-5 on ubuntu@vm02.local
2026-03-20T19:06:55.727 DEBUG:teuthology.orchestra.run.vm02:> sync && sudo umount -f /var/lib/ceph/osd/ceph-5
2026-03-20T19:06:55.826 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-6 on ubuntu@vm02.local
2026-03-20T19:06:55.826 DEBUG:teuthology.orchestra.run.vm02:> sync && sudo umount -f /var/lib/ceph/osd/ceph-6
2026-03-20T19:06:55.934 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-7 on ubuntu@vm02.local
2026-03-20T19:06:55.934 DEBUG:teuthology.orchestra.run.vm02:> sync && sudo umount -f /var/lib/ceph/osd/ceph-7
2026-03-20T19:06:56.035 INFO:tasks.ceph:Archiving mon data...
2026-03-20T19:06:56.035 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/mon/ceph-a to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/data/mon.a.tgz
2026-03-20T19:06:56.036 DEBUG:teuthology.orchestra.run.vm00:> mktemp
2026-03-20T19:06:56.051 INFO:teuthology.orchestra.run.vm00.stdout:/tmp/tmp.8rRvAbVOfn
2026-03-20T19:06:56.051 DEBUG:teuthology.orchestra.run.vm00:> sudo tar cz -f - -C /var/lib/ceph/mon/ceph-a -- . > /tmp/tmp.8rRvAbVOfn
2026-03-20T19:06:56.184 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0666 /tmp/tmp.8rRvAbVOfn
2026-03-20T19:06:56.264 DEBUG:teuthology.orchestra.remote:vm00:/tmp/tmp.8rRvAbVOfn is 456KB
2026-03-20T19:06:56.322 DEBUG:teuthology.orchestra.run.vm00:> rm -fr /tmp/tmp.8rRvAbVOfn
2026-03-20T19:06:56.337 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/mon/ceph-c to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/data/mon.c.tgz
2026-03-20T19:06:56.337 DEBUG:teuthology.orchestra.run.vm00:> mktemp
2026-03-20T19:06:56.394 INFO:teuthology.orchestra.run.vm00.stdout:/tmp/tmp.u4XUJs6BFS
2026-03-20T19:06:56.394 DEBUG:teuthology.orchestra.run.vm00:> sudo tar cz -f - -C /var/lib/ceph/mon/ceph-c -- . > /tmp/tmp.u4XUJs6BFS
2026-03-20T19:06:56.533 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0666 /tmp/tmp.u4XUJs6BFS
2026-03-20T19:06:56.622 DEBUG:teuthology.orchestra.remote:vm00:/tmp/tmp.u4XUJs6BFS is 475KB
2026-03-20T19:06:56.681 DEBUG:teuthology.orchestra.run.vm00:> rm -fr /tmp/tmp.u4XUJs6BFS
2026-03-20T19:06:56.696 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/mon/ceph-b to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/data/mon.b.tgz
2026-03-20T19:06:56.696 DEBUG:teuthology.orchestra.run.vm02:> mktemp
2026-03-20T19:06:56.712 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:06:56.712 INFO:teuthology.orchestra.run.vm02.stderr:mktemp: failed to create file via template ‘/tmp/tmp.XXXXXXXXXX’: No space left on device
2026-03-20T19:06:56.753 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-03-20T19:06:56.768 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-03-20T19:06:56.785 INFO:teuthology.util.scanner:summary_data or yaml_file is empty!
2026-03-20T19:06:56.785 INFO:tasks.ceph:Archiving crash dumps...
2026-03-20T19:06:56.785 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/crash to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/remote/vm00/crash
2026-03-20T19:06:56.785 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/crash -- .
2026-03-20T19:06:56.817 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/crash to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/remote/vm02/crash
2026-03-20T19:06:56.817 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/lib/ceph/crash -- .
2026-03-20T19:06:56.845 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/crash to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/remote/vm05/crash
2026-03-20T19:06:56.845 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/crash -- .
2026-03-20T19:06:56.879 INFO:tasks.ceph:Compressing logs...
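Log compression, next, uses find | xargs --max-procs=0 to gzip every *.log under /var/log/ceph in parallel on each host. A rough Python equivalent of that step, for illustration only (a hypothetical helper, not the teuthology implementation); like gzip itself, it needs room for the temporary .gz beside each log, which vm02 no longer has:

    import gzip
    import shutil
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def compress_log(path: Path) -> None:
        # Mirror `gzip -5`: write path.gz, then drop the original on success.
        gz = Path(str(path) + '.gz')
        with open(path, 'rb') as src, gzip.open(gz, 'wb', compresslevel=5) as dst:
            shutil.copyfileobj(src, dst)
        path.unlink()

    def compress_all(root: str = '/var/log/ceph') -> None:
        logs = list(Path(root).glob('*.log'))
        with ThreadPoolExecutor() as pool:  # parallel workers, in the spirit of --max-procs=0
            list(pool.map(compress_log, logs))

Because the workers run concurrently and share one stderr, their per-file progress lines interleave in the capture below, sometimes mid-line.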
2026-03-20T19:06:56.879 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-20T19:06:56.881 DEBUG:teuthology.orchestra.run.vm02:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-20T19:06:56.889 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-20T19:06:56.903 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.tmp-client.admin.51995.log
2026-03-20T19:06:56.903 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.0.log
2026-03-20T19:06:56.904 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.1.log
2026-03-20T19:06:56.904 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.tmp-client.admin.51995.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.2.log
2026-03-20T19:06:56.904 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph.tmp-client.admin.51995.log.gz
2026-03-20T19:06:56.904 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/ceph-osd.3.log
2026-03-20T19:06:56.904 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/ceph-mon.a.log
2026-03-20T19:06:56.908 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/ceph-mon.c.log
2026-03-20T19:06:56.914 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.4.log
2026-03-20T19:06:56.915 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.5.log
2026-03-20T19:06:56.915 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.6.log
2026-03-20T19:06:56.915 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-osd.4.log.gz: No space left on device
2026-03-20T19:06:56.916 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-osd.5.log.gz: No space left on device
2026-03-20T19:06:56.916 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.7.log
2026-03-20T19:06:56.916 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-osd.6.log.gz: No space left on device
2026-03-20T19:06:56.916 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mon.b.log
2026-03-20T19:06:56.916 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-osd.7.log.gz: No space left on device
2026-03-20T19:06:56.916 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.log
2026-03-20T19:06:56.917 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-mon.b.log.gz: No space left on device
2026-03-20T19:06:56.917 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mgr.x.log
2026-03-20T19:06:56.917 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph.log.gz: No space left on device
2026-03-20T19:06:56.917 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.56862.log
2026-03-20T19:06:56.917 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-mgr.x.log.gz: No space left on device
2026-03-20T19:06:56.918 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.56909.log
2026-03-20T19:06:56.918 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.56862.log.gz: No space left on device
2026-03-20T19:06:56.918 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.56956.log
2026-03-20T19:06:56.918 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.56909.log.gz: No space left on device
2026-03-20T19:06:56.918 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.audit.log
2026-03-20T19:06:56.919 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.56956.log.gz: No space left on device
2026-03-20T19:06:56.919 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.57003.log
2026-03-20T19:06:56.919 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph.audit.log.gz: No space left on device
2026-03-20T19:06:56.919 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.57050.log
2026-03-20T19:06:56.919 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.57003.log.gz: No space left on device
2026-03-20T19:06:56.919 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.57097.log
2026-03-20T19:06:56.920 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58428.log
2026-03-20T19:06:56.920 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.57050.log.gz: No space left on device
2026-03-20T19:06:56.920 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.57144.log
2026-03-20T19:06:56.920 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.57097.log.gz: No space left on device
2026-03-20T19:06:56.920 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.57191.log
2026-03-20T19:06:56.920 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.57144.log.gz: No space left on device
2026-03-20T19:06:56.920 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60726.log
2026-03-20T19:06:56.921 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.57191.log.gz: No space left on device
2026-03-20T19:06:56.921 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60773.log
2026-03-20T19:06:56.921 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.60726.log.gz: No space left on device
2026-03-20T19:06:56.921 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60820.log
2026-03-20T19:06:56.921 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.60867.log
2026-03-20T19:06:56.921 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.60773.log.gz: No space left on device
2026-03-20T19:06:56.922 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.admin.60820.log.gz: No space left on device
2026-03-20T19:06:56.922 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.60890.log
2026-03-20T19:06:56.922 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbosegzip: -- /var/log/ceph/ceph-client.1.60997.log
2026-03-20T19:06:56.922 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/ceph-client.admin.60867.log.gz: No space left on device
2026-03-20T19:06:56.922 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.1.60890.log.gz: No space left on device
2026-03-20T19:06:56.922 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.61099.log
2026-03-20T19:06:56.923 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.1.60997.log.gz: No space left on device
2026-03-20T19:06:56.923 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.61201.log
2026-03-20T19:06:56.923 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.1.61099.log.gz: No space left on device
2026-03-20T19:06:56.923 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.1.61303.log
2026-03-20T19:06:56.923 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.1.61201.log.gz: No space left on device
2026-03-20T19:06:56.923 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.1.log
2026-03-20T19:06:56.924 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ceph-client.1.61303.log.gz: No space left on device
2026-03-20T19:06:56.924 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.1.log
2026-03-20T19:06:56.924 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/rgw.ceph.client.1.log.gz: No space left on device
2026-03-20T19:06:56.924 INFO:teuthology.orchestra.run.vm02.stderr:gzip: /var/log/ceph/ops-log-ceph-client.1.log.gz: No space left on device
2026-03-20T19:06:56.927 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-20T19:06:56.927 INFO:teuthology.orchestra.run.vm02.stderr:real 0m0.023s
2026-03-20T19:06:56.927 INFO:teuthology.orchestra.run.vm02.stderr:user 0m0.016s
2026-03-20T19:06:56.927 INFO:teuthology.orchestra.run.vm02.stderr:sys 0m0.024s
2026-03-20T19:06:56.936 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.2.log: /var/log/ceph/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/ceph.log
2026-03-20T19:06:56.936 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.58428.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.58428.log.gz
2026-03-20T19:06:56.946 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mgr.y.log
2026-03-20T19:06:56.948 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.log: 92.9% -- replaced with /var/log/ceph/ceph.log.gz
2026-03-20T19:06:56.948 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58498.log
2026-03-20T19:06:56.949 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.50167.log
2026-03-20T19:06:56.950 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.50214.log
2026-03-20T19:06:56.950 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.50261.log
2026-03-20T19:06:56.950 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-client.admin.50167.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.50167.log.gz
2026-03-20T19:06:56.950 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-client.admin.50214.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.50214.log.gz
2026-03-20T19:06:56.951 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.50308.log
2026-03-20T19:06:56.951 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-client.admin.50261.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.50261.log.gz
2026-03-20T19:06:56.951 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50331.log
2026-03-20T19:06:56.951 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-client.admin.50308.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.50308.log.gz
2026-03-20T19:06:56.951 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50438.log
2026-03-20T19:06:56.951 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-client.2.50331.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50540.log
2026-03-20T19:06:56.951 INFO:teuthology.orchestra.run.vm05.stderr: 83.1% -- replaced with /var/log/ceph/ceph-client.2.50331.log.gz
2026-03-20T19:06:56.952 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-client.2.50438.log: 45.3% -- replaced with /var/log/ceph/ceph-client.2.50438.log.gz
2026-03-20T19:06:56.952 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50642.log
2026-03-20T19:06:56.952 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-client.2.50540.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.2.50744.log
2026-03-20T19:06:56.952 INFO:teuthology.orchestra.run.vm05.stderr: 43.5% -- replaced with /var/log/ceph/ceph-client.2.50540.log.gz
2026-03-20T19:06:56.952 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-client.2.50642.log: 44.9% -- replaced with /var/log/ceph/ceph-client.2.50642.log.gz
2026-03-20T19:06:56.953 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.2.log
2026-03-20T19:06:56.953 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.2.log
2026-03-20T19:06:56.953 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/rgw.ceph.client.2.log: /var/log/ceph/ceph-client.2.50744.log: 45.6% -- replaced with /var/log/ceph/ceph-client.2.50744.log.gz
2026-03-20T19:06:56.953 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ops-log-ceph-client.2.log: 35.1% -- replaced with /var/log/ceph/ops-log-ceph-client.2.log.gz
2026-03-20T19:06:56.965 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/ceph.audit.log
2026-03-20T19:06:56.965 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.58498.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.58498.log.gz
2026-03-20T19:06:56.965 INFO:teuthology.orchestra.run.vm00.stderr: 94.5% -- replaced with /var/log/ceph/ceph-mgr.y.log.gz
2026-03-20T19:06:56.974 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.58764.log
2026-03-20T19:06:56.981 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph.audit.log: 94.4% -- replaced with /var/log/ceph/ceph.audit.log.gz
2026-03-20T19:06:56.981 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62065.log
2026-03-20T19:06:56.982 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.58764.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.58764.log.gz
2026-03-20T19:06:56.987
INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62131.log 2026-03-20T19:06:56.987 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62065.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62065.log.gz 2026-03-20T19:06:56.995 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62201.log 2026-03-20T19:06:56.995 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62131.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62131.log.gz 2026-03-20T19:06:56.995 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62226.log 2026-03-20T19:06:56.996 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62201.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62201.log.gz 2026-03-20T19:06:57.001 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62291.log 2026-03-20T19:06:57.002 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62226.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62226.log.gz 2026-03-20T19:06:57.004 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62340.log 2026-03-20T19:06:57.007 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62291.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62291.log.gz 2026-03-20T19:06:57.007 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62389.log 2026-03-20T19:06:57.007 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62340.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62340.log.gz 2026-03-20T19:06:57.008 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62438.log 2026-03-20T19:06:57.010 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62389.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62389.log.gz 2026-03-20T19:06:57.010 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62664.log 2026-03-20T19:06:57.011 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62438.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62438.log.gz 2026-03-20T19:06:57.013 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62670.log 2026-03-20T19:06:57.014 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62664.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62664.log.gz 2026-03-20T19:06:57.014 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62676.log 2026-03-20T19:06:57.016 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62670.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62670.log.gz 2026-03-20T19:06:57.031 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62672.log 2026-03-20T19:06:57.031 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62676.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62676.log.gz 2026-03-20T19:06:57.035 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62675.log 2026-03-20T19:06:57.036 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62672.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62672.log.gz 2026-03-20T19:06:57.039 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62667.log 2026-03-20T19:06:57.039 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62675.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62675.log.gz 2026-03-20T19:06:57.039 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62653.log 2026-03-20T19:06:57.047 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62667.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62667.log.gz 2026-03-20T19:06:57.049 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62665.log 2026-03-20T19:06:57.051 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62653.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62653.log.gz 2026-03-20T19:06:57.052 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.62923.log 2026-03-20T19:06:57.055 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62665.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62665.log.gz 2026-03-20T19:06:57.063 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63004.log 2026-03-20T19:06:57.063 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.62923.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.62923.log.gz 2026-03-20T19:06:57.063 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63011.log 2026-03-20T19:06:57.063 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63004.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63004.log.gz 2026-03-20T19:06:57.065 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63010.log 2026-03-20T19:06:57.075 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63011.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63011.log.gz 2026-03-20T19:06:57.078 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63014.log 2026-03-20T19:06:57.078 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63010.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63010.log.gz 2026-03-20T19:06:57.083 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63000.log 2026-03-20T19:06:57.085 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63014.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63014.log.gz 2026-03-20T19:06:57.094 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63098.log 2026-03-20T19:06:57.094 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63000.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63000.log.gz 2026-03-20T19:06:57.100 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63099.log 2026-03-20T19:06:57.100 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63098.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63098.log.gz 2026-03-20T19:06:57.109 
INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63298.log 2026-03-20T19:06:57.109 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63099.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63099.log.gz 2026-03-20T19:06:57.109 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63297.log 2026-03-20T19:06:57.109 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63298.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63365.log 2026-03-20T19:06:57.109 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63298.log.gz 2026-03-20T19:06:57.110 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63297.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63297.log.gz 2026-03-20T19:06:57.110 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63394.log 2026-03-20T19:06:57.110 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63365.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63365.log.gz 2026-03-20T19:06:57.110 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63490.log 2026-03-20T19:06:57.110 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63394.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63497.log 2026-03-20T19:06:57.110 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63394.log.gz 2026-03-20T19:06:57.110 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63490.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63490.log.gz 2026-03-20T19:06:57.110 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63489.log 2026-03-20T19:06:57.111 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63497.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63580.log 2026-03-20T19:06:57.111 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63489.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63489.log.gz 2026-03-20T19:06:57.116 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63497.log.gz 2026-03-20T19:06:57.116 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63663.log 2026-03-20T19:06:57.117 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63580.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63712.log 2026-03-20T19:06:57.117 INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63580.log.gz 2026-03-20T19:06:57.117 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63663.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63663.log.gz 2026-03-20T19:06:57.133 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63761.log 2026-03-20T19:06:57.138 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63712.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63808.log 2026-03-20T19:06:57.139 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63761.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63712.log.gz 2026-03-20T19:06:57.139 
INFO:teuthology.orchestra.run.vm00.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63761.log.gz 2026-03-20T19:06:57.140 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63857.log 2026-03-20T19:06:57.140 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63808.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63808.log.gz 2026-03-20T19:06:57.145 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63904.log 2026-03-20T19:06:57.145 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63857.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63857.log.gz 2026-03-20T19:06:57.156 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.63953.log 2026-03-20T19:06:57.156 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63904.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63904.log.gz 2026-03-20T19:06:57.161 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.63976.log 2026-03-20T19:06:57.161 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.63953.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.63953.log.gz 2026-03-20T19:06:57.172 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.64091.log 2026-03-20T19:06:57.172 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.63976.log: 95.1% -- replaced with /var/log/ceph/ceph-client.0.63976.log.gz 2026-03-20T19:06:57.177 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.64193.log 2026-03-20T19:06:57.178 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.64091.log: 45.3% -- replaced with /var/log/ceph/ceph-client.0.64091.log.gz 2026-03-20T19:06:57.189 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.64295.log 2026-03-20T19:06:57.189 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.64193.log: 43.5% -- replaced with /var/log/ceph/ceph-client.0.64193.log.gz 2026-03-20T19:06:57.195 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.64397.log 2026-03-20T19:06:57.197 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.64295.log: 44.9% -- replaced with /var/log/ceph/ceph-client.0.64295.log.gz 2026-03-20T19:06:57.204 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/rgw.ceph.client.0.log 2026-03-20T19:06:57.204 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.64397.log: 45.3% -- replaced with /var/log/ceph/ceph-client.0.64397.log.gz 2026-03-20T19:06:57.209 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ops-log-ceph-client.0.log 2026-03-20T19:06:57.220 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/rgw.ceph.client.0.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.0.65392.log 2026-03-20T19:06:57.227 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ops-log-ceph-client.0.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65581.log 2026-03-20T19:06:57.232 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.65392.log: 84.5% -- replaced with /var/log/ceph/ceph-client.0.65392.log.gz 2026-03-20T19:06:57.236 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65722.log 
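Every vm02 failure in this teardown (the mktemp for mon.b's archive, then each gzip above) reflects the same underlying condition: the filesystem on that host is full. A hypothetical preflight probe, not part of teuthology, that would surface the condition before archiving begins:

    import os

    def free_bytes(path: str) -> int:
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize  # bytes available to unprivileged users

    for mount in ('/tmp', '/var/log/ceph'):
        if free_bytes(mount) < 1 << 30:  # e.g. insist on 1 GiB of headroom
            print(f'warning: {mount} has less than 1 GiB free')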
2026-03-20T19:06:57.236 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65581.log: 83.3% -- replaced with /var/log/ceph/ceph-client.admin.65581.log.gz 2026-03-20T19:06:57.247 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65756.log 2026-03-20T19:06:57.247 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65722.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65722.log.gz 2026-03-20T19:06:57.254 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65790.log 2026-03-20T19:06:57.254 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65756.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.65756.log.gz 2026-03-20T19:06:57.272 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65887.log 2026-03-20T19:06:57.276 INFO:teuthology.orchestra.run.vm05.stderr: 93.6% -- replaced with /var/log/ceph/rgw.ceph.client.2.log.gz 2026-03-20T19:06:57.277 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65790.log: 85.3% -- replaced with /var/log/ceph/ceph-client.admin.65790.log.gz 2026-03-20T19:06:57.278 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-20T19:06:57.278 INFO:teuthology.orchestra.run.vm05.stderr:real 0m0.340s 2026-03-20T19:06:57.278 INFO:teuthology.orchestra.run.vm05.stderr:user 0m0.318s 2026-03-20T19:06:57.278 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m0.036s 2026-03-20T19:06:57.284 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.65984.log 2026-03-20T19:06:57.290 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65887.log: 92.4% -- replaced with /var/log/ceph/ceph-mon.c.log.gz 2026-03-20T19:06:57.290 INFO:teuthology.orchestra.run.vm00.stderr: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.65887.log.gz 2026-03-20T19:06:57.306 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66151.log 2026-03-20T19:06:57.306 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.65984.log: 85.2% -- replaced with /var/log/ceph/ceph-client.admin.65984.log.gz 2026-03-20T19:06:57.316 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66185.log 2026-03-20T19:06:57.316 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66151.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66151.log.gz 2026-03-20T19:06:57.331 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66219.log 2026-03-20T19:06:57.331 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66185.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66185.log.gz 2026-03-20T19:06:57.354 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66330.log 2026-03-20T19:06:57.354 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66219.log: 82.5% -- replaced with /var/log/ceph/ceph-client.admin.66219.log.gz 2026-03-20T19:06:57.364 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66428.log 2026-03-20T19:06:57.372 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66330.log: 84.9% -- replaced with /var/log/ceph/ceph-client.admin.66330.log.gz 2026-03-20T19:06:57.381 INFO:teuthology.orchestra.run.vm00.stderr:gzip 
-5 --verbose -- /var/log/ceph/ceph-client.admin.66525.log 2026-03-20T19:06:57.390 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66428.log: 93.1% -- replaced with /var/log/ceph/ceph-client.admin.66428.log.gz 2026-03-20T19:06:57.400 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66559.log 2026-03-20T19:06:57.400 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66525.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66525.log.gz 2026-03-20T19:06:57.415 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66595.log 2026-03-20T19:06:57.415 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66559.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66559.log.gz 2026-03-20T19:06:57.435 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.66629.log 2026-03-20T19:06:57.435 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66595.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66595.log.gz 2026-03-20T19:06:57.445 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67456.log 2026-03-20T19:06:57.445 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.66629.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.66629.log.gz 2026-03-20T19:06:57.464 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67490.log 2026-03-20T19:06:57.464 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67456.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67456.log.gz 2026-03-20T19:06:57.479 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67524.log 2026-03-20T19:06:57.479 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67490.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67490.log.gz 2026-03-20T19:06:57.494 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67623.log 2026-03-20T19:06:57.495 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67524.log: 82.8% -- replaced with /var/log/ceph/ceph-client.admin.67524.log.gz 2026-03-20T19:06:57.515 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67721.log 2026-03-20T19:06:57.525 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67623.log: 85.0% -- replaced with /var/log/ceph/ceph-client.admin.67623.log.gz 2026-03-20T19:06:57.532 INFO:teuthology.orchestra.run.vm00.stderr: 91.2% -- replaced with /var/log/ceph/ceph-mon.a.log.gz 2026-03-20T19:06:57.535 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67818.log 2026-03-20T19:06:57.547 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67721.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67852.log 2026-03-20T19:06:57.547 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67818.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67818.log.gz 2026-03-20T19:06:57.550 INFO:teuthology.orchestra.run.vm00.stderr: 96.8% -- replaced with /var/log/ceph/ceph-client.admin.67721.log.gz 2026-03-20T19:06:57.561 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67888.log 
2026-03-20T19:06:57.561 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67852.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67852.log.gz
2026-03-20T19:06:57.576 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.67922.log
2026-03-20T19:06:57.576 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.67888.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.67888.log.gz
2026-03-20T19:06:57.591 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.68029.log
[... several hundred similar stderr records: parallel gzip workers compress the remaining /var/log/ceph/ceph-client.admin.*.log files at ratios between 0.0% and 96.5%; because the workers write stderr concurrently, the command and result halves of many records arrive interleaved ...]
2026-03-20T19:06:58.748 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.553958.log
2026-03-20T19:06:58.749 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.320979.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.320979.log.gz
2026-03-20T19:06:58.754 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.0.554133.log
2026-03-20T19:06:58.755 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.553958.log: 8.2% -- replaced with /var/log/ceph/ceph-client.0.553958.log.gz
2026-03-20T19:06:58.765 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.554207.log
2026-03-20T19:06:58.765 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.0.554133.log: 8.2% -- replaced with /var/log/ceph/ceph-client.0.554133.log.gz
2026-03-20T19:06:58.779 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.554207.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.554207.log.gz
2026-03-20T19:06:59.037 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-client.admin.320777.log: 86.9% -- replaced with /var/log/ceph/ceph-client.admin.320777.log.gz
2026-03-20T19:07:30.425 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/ceph-osd.2.log.gz: No space left on device
2026-03-20T19:07:30.425 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/rgw.ceph.client.0.log.gz: No space left on device
2026-03-20T19:07:30.425 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/ceph-osd.3.log.gz: No space left on device
2026-03-20T19:07:30.425 INFO:teuthology.orchestra.run.vm00.stderr:gzip: /var/log/ceph/ceph-osd.1.log.gz: No space left on device
2026-03-20T19:07:56.160 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/ceph-osd.0.log: 93.3% -- replaced with /var/log/ceph/ceph-osd.0.log.gz
2026-03-20T19:07:56.162 INFO:teuthology.orchestra.run.vm00.stderr:real 0m59.269s
2026-03-20T19:07:56.162 INFO:teuthology.orchestra.run.vm00.stderr:user 2m25.074s
2026-03-20T19:07:56.162 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m9.008s
2026-03-20T19:07:56.162 DEBUG:teuthology.orchestra.run:got remote process result: 123
2026-03-20T19:07:56.162 ERROR:teuthology.run_tasks:Manager failed: ceph
Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2001, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 1181, in cluster
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 2011, in task
    ctx.managers[config['cluster']].wait_for_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2919, in wait_for_clean
    num_active_clean = self.get_num_active_clean()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2698, in get_num_active_clean
    pgs = self.get_pg_stats()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 2464, in get_pg_stats
    out = self.raw_cluster_cmd('pg', 'dump', '--format=json')
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1696, in raw_cluster_cmd
    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph_manager.py", line 1687, in run_cluster_cmd
    return self.controller.run(**kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 1996, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/ceph.py", line 263, in ceph_log
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --"
2026-03-20T19:07:56.162 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-20T19:07:56.165 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 644, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'
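
The three chained tracebacks above share one root cause: the filesystem holding /var/log/ceph on vm00 is full. The `radosgw-admin gc process` call fails with status 1, the follow-up `timeout 120 ceph pg dump` health check exits 124 (the GNU coreutils `timeout` convention for an expired timer), and the log-compression sweep exits 123 (GNU `xargs` returns 123 when any invocation exits with a status between 1 and 125, here gzip hitting ENOSPC). Each nonzero status surfaces as a CommandFailedError raised from `Remote.run()`. Below is a stripped-down sketch of that raise-on-status pattern, not teuthology's actual implementation; the names and signatures are simplified for illustration.

    import subprocess


    class CommandFailedError(Exception):
        """Simplified stand-in for teuthology.exceptions.CommandFailedError."""

        def __init__(self, command, exitstatus, node):
            super().__init__(
                f"Command failed on {node} with status {exitstatus}: {command!r}")
            self.command = command
            self.exitstatus = exitstatus


    def run(node, command, check_status=True):
        # teuthology executes the command over SSH on the target node;
        # subprocess stands in for that transport here.
        proc = subprocess.run(command, shell=True)
        # Mirrors the RemoteProcess.wait() -> _raise_for_status() pattern
        # seen in the tracebacks: any nonzero exit status becomes an
        # exception unless the caller passed check_status=False.
        if check_status and proc.returncode != 0:
            raise CommandFailedError(command, proc.returncode, node)
        return proc


    # A full disk reproduces the cascade: gzip exits nonzero, xargs turns
    # that into exit status 123, and run() raises CommandFailedError.
    # run("vm00", "find /var/log/ceph -name '*.log' -print0 "
    #             "| xargs -0 --no-run-if-empty gzip -5")
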
2026-03-20T19:07:56.165 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-20T19:07:56.165 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-20T19:07:56.203 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-20T19:07:56.205 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-20T19:07:56.237 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-20T19:07:56.237 DEBUG:teuthology.orchestra.run.vm00:>
2026-03-20T19:07:56.237 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-20T19:07:56.237 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y remove $d || true
2026-03-20T19:07:56.237 DEBUG:teuthology.orchestra.run.vm00:> done
2026-03-20T19:07:56.242 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-20T19:07:56.243 DEBUG:teuthology.orchestra.run.vm02:>
2026-03-20T19:07:56.243 DEBUG:teuthology.orchestra.run.vm02:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-20T19:07:56.243 DEBUG:teuthology.orchestra.run.vm02:> sudo yum -y remove $d || true
2026-03-20T19:07:56.243 DEBUG:teuthology.orchestra.run.vm02:> done
2026-03-20T19:07:56.247 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-20T19:07:56.247 DEBUG:teuthology.orchestra.run.vm05:>
2026-03-20T19:07:56.247 DEBUG:teuthology.orchestra.run.vm05:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-20T19:07:56.247 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y remove $d || true
2026-03-20T19:07:56.247 DEBUG:teuthology.orchestra.run.vm05:> done
2026-03-20T19:07:56.389 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout: Package        Arch      Version                       Repository    Size
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw   x86_64    2:20.2.0-712.g70f8415b.el9    @ceph        103 M
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout: mailcap        noarch    2.1.49-5.el9                  @baseos       78 k
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 103 M
2026-03-20T19:07:56.451 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-20T19:07:56.453 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-20T19:07:56.453 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-20T19:07:56.472 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-20T19:07:56.472 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-20T19:07:56.508 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-20T19:07:56.520 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:56.530 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T19:07:56.530 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:56.530 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-20T19:07:56.530 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-20T19:07:56.530 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-20T19:07:56.530 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:56.537 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T19:07:56.551 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T19:07:56.565 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-20T19:07:56.612 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout: Package        Arch      Version                       Repository    Size
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw   x86_64    2:20.2.0-712.g70f8415b.el9    @ceph        103 M
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout: mailcap        noarch    2.1.49-5.el9                  @baseos       78 k
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:Remove 2 Packages
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 103 M
2026-03-20T19:07:56.613 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-20T19:07:56.618 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-20T19:07:56.618 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-20T19:07:56.638 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-20T19:07:56.638 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T19:07:56.644 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:56.651 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-20T19:07:56.651 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T19:07:56.678 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T19:07:56.695 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-20T19:07:56.695 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:56.695 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-20T19:07:56.695 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-20T19:07:56.695 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:56.695 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
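
The package removal driving the transactions above is deliberately best-effort: each package gets its own `sudo yum -y remove $d || true`, so a missing package, or a dnf failure like the ENOSPC on vm02, cannot abort the rest of the cleanup. A minimal sketch of how such a loop can be assembled (illustrative only; the real logic lives in teuthology's install task, and `removal_script` is a hypothetical helper):

    def removal_script(packages):
        """Build a best-effort per-package removal loop.

        Removing one package per `yum -y remove` keeps a single failure
        from failing the whole cleanup; `|| true` forces each step's
        exit status to zero so the loop itself always succeeds.
        """
        joined = " ".join(packages)
        return (f"for d in {joined} ; do\n"
                "sudo yum -y remove $d || true\n"
                "done")


    if __name__ == "__main__":
        print(removal_script(["ceph-radosgw", "ceph-test", "ceph-base"]))
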
2026-03-20T19:07:56.703 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T19:07:56.703 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:56.703 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-20T19:07:56.703 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-20T19:07:56.703 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-20T19:07:56.703 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:56.707 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T19:07:56.717 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T19:07:56.733 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-20T19:07:56.774 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:56.900 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 362 M
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:Remove 4 Packages
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 364 M
2026-03-20T19:07:56.913 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-20T19:07:56.916 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-20T19:07:56.916 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-20T19:07:56.939 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-20T19:07:56.940 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-20T19:07:57.025 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:57.107 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-20T19:07:57.132 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 1/4
2026-03-20T19:07:57.135 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-20T19:07:57.146 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-20T19:07:57.152 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:57.162 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-20T19:07:57.168 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-20T19:07:57.169 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 1/2
2026-03-20T19:07:57.216 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-20T19:07:57.216 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.216 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-20T19:07:57.216 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-20T19:07:57.216 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.216 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-20T19:07:57.263 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-20T19:07:57.263 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 1/4
2026-03-20T19:07:57.263 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-20T19:07:57.263 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-20T19:07:57.278 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:57.310 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-20T19:07:57.310 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:57.310 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-20T19:07:57.311 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-20T19:07:57.311 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-20T19:07:57.311 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:57.311 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-20T19:07:57.405 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:57.414 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 362 M
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:Remove 4 Packages
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 364 M
2026-03-20T19:07:57.415 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-20T19:07:57.418 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-20T19:07:57.418 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-20T19:07:57.444 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-20T19:07:57.444 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T19:07:57.505 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T19:07:57.512 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 1/4
2026-03-20T19:07:57.514 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-20T19:07:57.516 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 0
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 6.8 M
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 19 M
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:Remove 8 Packages
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 28 M
2026-03-20T19:07:57.517 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-20T19:07:57.520 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-20T19:07:57.520 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-20T19:07:57.531 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:57.532 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-20T19:07:57.543 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-20T19:07:57.543 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-20T19:07:57.584 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-20T19:07:57.589 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/8
2026-03-20T19:07:57.593 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-20T19:07:57.595 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-20T19:07:57.598 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-20T19:07:57.599 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-20T19:07:57.599 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 1/4
2026-03-20T19:07:57.599 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-20T19:07:57.599 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-20T19:07:57.600 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-20T19:07:57.602 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-20T19:07:57.624 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T19:07:57.624 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:57.624 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T19:07:57.624 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-20T19:07:57.624 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-20T19:07:57.624 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:57.625 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T19:07:57.635 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T19:07:57.646 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-20T19:07:57.646 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.646 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-20T19:07:57.646 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-20T19:07:57.646 INFO:teuthology.orchestra.run.vm00.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-20T19:07:57.646 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.646 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-20T19:07:57.655 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T19:07:57.655 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:57.655 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T19:07:57.655 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-20T19:07:57.655 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-20T19:07:57.655 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:57.657 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T19:07:57.658 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:57.749 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T19:07:57.749 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/8
2026-03-20T19:07:57.749 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 2/8
2026-03-20T19:07:57.749 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 3/8
2026-03-20T19:07:57.749 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-20T19:07:57.749 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-20T19:07:57.749 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-20T19:07:57.749 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-20T19:07:57.785 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: lua-5.4.4-4.el9.x86_64
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: unzip-6.0-59.el9.x86_64
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout: zip-3.0-35.el9.x86_64
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:57.796 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-20T19:07:57.843 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 0
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 6.8 M
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 19 M
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:Remove 8 Packages
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 28 M
2026-03-20T19:07:57.844 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-20T19:07:57.847 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-20T19:07:57.847 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-20T19:07:57.875 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-20T19:07:57.876 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T19:07:57.911 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:57.920 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T19:07:57.926 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/8
2026-03-20T19:07:57.929 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-20T19:07:57.931 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-20T19:07:57.935 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-20T19:07:57.938 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-20T19:07:57.940 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-20T19:07:57.960 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T19:07:57.960 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:57.960 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-20T19:07:57.960 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-20T19:07:57.960 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-20T19:07:57.960 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.960 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T19:07:57.967 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 7/8
2026-03-20T19:07:57.986 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T19:07:57.987 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:57.987 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-20T19:07:57.987 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-20T19:07:57.987 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-20T19:07:57.987 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:57.988 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T19:07:58.004 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout:===========================================================================================
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout:===========================================================================================
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 24 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 447 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 2.9 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 940 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 140 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 66 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 567 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 54 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 1.4 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 11 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 98 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 996 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 1.6 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 59 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 138 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 409 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 792 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-20T19:07:58.010 INFO:teuthology.orchestra.run.vm05.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 855 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing noarch 2.4.7-9.el9 @baseos 635 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-20T19:07:58.011 INFO:teuthology.orchestra.run.vm05.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout:===========================================================================================
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout:Remove 98 Packages
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 666 M
2026-03-20T19:07:58.012 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-20T19:07:58.035 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.037 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-20T19:07:58.037 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-20T19:07:58.066 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/8
2026-03-20T19:07:58.066 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/8
2026-03-20T19:07:58.067 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 2/8
2026-03-20T19:07:58.067 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 3/8
2026-03-20T19:07:58.067 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-20T19:07:58.067 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-20T19:07:58.067 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-20T19:07:58.067 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:58.112 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-20T19:07:58.144 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-20T19:07:58.145 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-20T19:07:58.160 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.277 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.284 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-20T19:07:58.284 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 1/98
2026-03-20T19:07:58.292 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 1/98
2026-03-20T19:07:58.303 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout:===========================================================================================
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout:===========================================================================================
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 24 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages:
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 447 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 2.9 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 940 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 140 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 66 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 567 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 54 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 1.4 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 11 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 98 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 996 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 1.6 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 @ceph-noarch 59 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 138 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 409 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-20T19:07:58.309 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 792 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 @ceph 855 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 @baseos 635 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-20T19:07:58.310 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout:===========================================================================================
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout:Remove 98 Packages
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 666 M
2026-03-20T19:07:58.311 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-20T19:07:58.312 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T19:07:58.312 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:58.312 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T19:07:58.312 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-20T19:07:58.312 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-20T19:07:58.312 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:58.312 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T19:07:58.326 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T19:07:58.335 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-20T19:07:58.335 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-20T19:07:58.387 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.n 3/98
2026-03-20T19:07:58.388 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noar 4/98
2026-03-20T19:07:58.393 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.448 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-20T19:07:58.448 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-20T19:07:58.449 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noar 4/98
2026-03-20T19:07:58.458 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/98
2026-03-20T19:07:58.464 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/98
2026-03-20T19:07:58.464 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 7/98
2026-03-20T19:07:58.477 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 7/98
2026-03-20T19:07:58.484 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/98
2026-03-20T19:07:58.488 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/98
2026-03-20T19:07:58.498 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/98
2026-03-20T19:07:58.502 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/98
2026-03-20T19:07:58.512 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.522 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T19:07:58.522 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:58.522 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T19:07:58.522 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-20T19:07:58.522 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-20T19:07:58.522 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:58.528 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T19:07:58.537 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T19:07:58.555 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T19:07:58.556 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:58.556 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T19:07:58.556 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:58.564 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T19:07:58.573 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T19:07:58.576 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/98
2026-03-20T19:07:58.581 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/98
2026-03-20T19:07:58.586 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/98
2026-03-20T19:07:58.595 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/98
2026-03-20T19:07:58.599 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/98
2026-03-20T19:07:58.604 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-20T19:07:58.604 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 1/98
2026-03-20T19:07:58.609 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 19/98
2026-03-20T19:07:58.612 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 1/98
2026-03-20T19:07:58.616 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 20/98
2026-03-20T19:07:58.630 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T19:07:58.630 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:58.630 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-20T19:07:58.630 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-20T19:07:58.630 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-20T19:07:58.630 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:58.631 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T19:07:58.636 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.644 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2/98
2026-03-20T19:07:58.645 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 21/98
2026-03-20T19:07:58.652 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 22/98
2026-03-20T19:07:58.655 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 23/98
2026-03-20T19:07:58.664 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 24/98
2026-03-20T19:07:58.678 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 25/98
2026-03-20T19:07:58.679 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f841 26/98
2026-03-20T19:07:58.688 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f841 26/98
2026-03-20T19:07:58.699 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.n 3/98
2026-03-20T19:07:58.699 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noar 4/98
2026-03-20T19:07:58.752 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.760 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noar 4/98
2026-03-20T19:07:58.770 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/98
2026-03-20T19:07:58.775 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/98
2026-03-20T19:07:58.775 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 7/98
2026-03-20T19:07:58.782 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 27/98
2026-03-20T19:07:58.787 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 7/98
2026-03-20T19:07:58.794 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/98
2026-03-20T19:07:58.798 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 28/98
2026-03-20T19:07:58.799 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/98
2026-03-20T19:07:58.807 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/98
2026-03-20T19:07:58.811 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/98
2026-03-20T19:07:58.813 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T19:07:58.813 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-20T19:07:58.813 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:58.815 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T19:07:58.831 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T19:07:58.831 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:58.831 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-20T19:07:58.831 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-20T19:07:58.831 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-20T19:07:58.831 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:58.834 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T19:07:58.842 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T19:07:58.842 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 12/98
2026-03-20T19:07:58.857 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T19:07:58.857 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:58.857 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-20T19:07:58.857 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:58.858 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 30/98
2026-03-20T19:07:58.864 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 31/98
2026-03-20T19:07:58.865 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T19:07:58.867 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 32/98
2026-03-20T19:07:58.869 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 33/98
2026-03-20T19:07:58.869 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.875 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 13/98
2026-03-20T19:07:58.877 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/98
2026-03-20T19:07:58.882 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/98
2026-03-20T19:07:58.887 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/98
2026-03-20T19:07:58.895 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T19:07:58.895 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:58.895 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T19:07:58.895 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-20T19:07:58.895 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-20T19:07:58.895 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:58.896 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/98
2026-03-20T19:07:58.897 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T19:07:58.901 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/98
2026-03-20T19:07:58.906 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T19:07:58.909 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 35/98
2026-03-20T19:07:58.911 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 19/98
2026-03-20T19:07:58.912 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 36/98
2026-03-20T19:07:58.914 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 37/98
2026-03-20T19:07:58.917 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 38/98
2026-03-20T19:07:58.917 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 20/98
2026-03-20T19:07:58.921 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 39/98
2026-03-20T19:07:58.925 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 40/98
2026-03-20T19:07:58.930 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 41/98
2026-03-20T19:07:58.947 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 21/98
2026-03-20T19:07:58.954 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 22/98
2026-03-20T19:07:58.957 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 23/98
2026-03-20T19:07:58.967 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 24/98
2026-03-20T19:07:58.974 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 25/98
2026-03-20T19:07:58.974 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f841 26/98
2026-03-20T19:07:58.979 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 42/98
2026-03-20T19:07:58.981 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f841 26/98
2026-03-20T19:07:58.987 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:58.990 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 43/98
2026-03-20T19:07:58.993 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 44/98
2026-03-20T19:07:58.998 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 45/98
2026-03-20T19:07:59.000 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 46/98
2026-03-20T19:07:59.004 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 47/98
2026-03-20T19:07:59.007 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 48/98
2026-03-20T19:07:59.029 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T19:07:59.029 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:59.029 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T19:07:59.029 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:59.029 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T19:07:59.039 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T19:07:59.041 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 50/98
2026-03-20T19:07:59.043 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 51/98
2026-03-20T19:07:59.046 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ply-3.11-14.el9.noarch 52/98
2026-03-20T19:07:59.048 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 53/98
2026-03-20T19:07:59.051 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 54/98
2026-03-20T19:07:59.053 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 55/98
2026-03-20T19:07:59.057 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 56/98
2026-03-20T19:07:59.060 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/98
2026-03-20T19:07:59.063 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.noarch 58/98
2026-03-20T19:07:59.071 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/98
2026-03-20T19:07:59.074 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/98
2026-03-20T19:07:59.077 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/98
2026-03-20T19:07:59.077 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 27/98
2026-03-20T19:07:59.080 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/98
2026-03-20T19:07:59.083 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/98
2026-03-20T19:07:59.088 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/98
2026-03-20T19:07:59.093 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/98
2026-03-20T19:07:59.094 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 28/98
2026-03-20T19:07:59.099 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 66/98
2026-03-20T19:07:59.103 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 67/98
2026-03-20T19:07:59.104 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:59.105 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 68/98
2026-03-20T19:07:59.107 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T19:07:59.107 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-20T19:07:59.107 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:59.108 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T19:07:59.112 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 69/98
2026-03-20T19:07:59.115 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 70/98
2026-03-20T19:07:59.118 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 71/98
2026-03-20T19:07:59.128 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 72/98
2026-03-20T19:07:59.133 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 73/98
2026-03-20T19:07:59.136 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 74/98
2026-03-20T19:07:59.139 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 75/98
2026-03-20T19:07:59.141 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9 76/98
2026-03-20T19:07:59.141 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 29/98
2026-03-20T19:07:59.142 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9. 77/98
2026-03-20T19:07:59.157 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 30/98
2026-03-20T19:07:59.162 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 31/98
2026-03-20T19:07:59.162 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T19:07:59.162 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-20T19:07:59.162 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-20T19:07:59.163 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:59.164 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 32/98
2026-03-20T19:07:59.167 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 33/98
2026-03-20T19:07:59.170 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T19:07:59.170 INFO:teuthology.orchestra.run.vm05.stdout:warning: file /etc/logrotate.d/ceph: remove failed: No such file or directory
2026-03-20T19:07:59.170 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-20T19:07:59.185 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T19:07:59.185 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:59.185 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-20T19:07:59.185 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-20T19:07:59.185 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-20T19:07:59.185 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:59.186 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T19:07:59.194 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 34/98
2026-03-20T19:07:59.198 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 35/98
2026-03-20T19:07:59.200 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 36/98
2026-03-20T19:07:59.203 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 37/98
2026-03-20T19:07:59.205 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 38/98
2026-03-20T19:07:59.206 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T19:07:59.206 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 79/98
2026-03-20T19:07:59.209 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 39/98
2026-03-20T19:07:59.213 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 40/98
2026-03-20T19:07:59.217 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 41/98
2026-03-20T19:07:59.222 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 79/98
2026-03-20T19:07:59.226 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:59.228 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 80/98
2026-03-20T19:07:59.231 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86 81/98
2026-03-20T19:07:59.233 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 82/98
2026-03-20T19:07:59.233 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 83/98
2026-03-20T19:07:59.267 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 42/98
2026-03-20T19:07:59.278 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 43/98
2026-03-20T19:07:59.281 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 44/98
2026-03-20T19:07:59.284 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 45/98
2026-03-20T19:07:59.286 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 46/98
2026-03-20T19:07:59.289 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 47/98
2026-03-20T19:07:59.292 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 48/98
2026-03-20T19:07:59.310 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T19:07:59.310 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-20T19:07:59.310 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-20T19:07:59.310 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:59.310 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T19:07:59.318 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b 49/98
2026-03-20T19:07:59.319 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 50/98
2026-03-20T19:07:59.321 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 51/98
2026-03-20T19:07:59.324 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ply-3.11-14.el9.noarch 52/98
2026-03-20T19:07:59.327 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 53/98
2026-03-20T19:07:59.329 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 54/98
2026-03-20T19:07:59.332 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 55/98
2026-03-20T19:07:59.334 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 56/98
2026-03-20T19:07:59.337 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/98
2026-03-20T19:07:59.339 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.noarch 58/98
2026-03-20T19:07:59.343 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:59.347 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/98
2026-03-20T19:07:59.351 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/98
2026-03-20T19:07:59.353 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/98
2026-03-20T19:07:59.355 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/98
2026-03-20T19:07:59.358 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/98
2026-03-20T19:07:59.361 DEBUG:teuthology.orchestra.run.vm02:> sudo yum clean all
2026-03-20T19:07:59.364 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/98
2026-03-20T19:07:59.368 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/98
2026-03-20T19:07:59.374 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 66/98
2026-03-20T19:07:59.377 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 67/98
2026-03-20T19:07:59.379 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 68/98
2026-03-20T19:07:59.384 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 69/98
2026-03-20T19:07:59.388 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 70/98
2026-03-20T19:07:59.391 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 71/98
2026-03-20T19:07:59.398 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 72/98
2026-03-20T19:07:59.404 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 73/98
2026-03-20T19:07:59.407 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 74/98
2026-03-20T19:07:59.409 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 75/98
2026-03-20T19:07:59.410 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9 76/98
2026-03-20T19:07:59.412 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9. 77/98
2026-03-20T19:07:59.430 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T19:07:59.430 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-20T19:07:59.430 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-20T19:07:59.430 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:59.437 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T19:07:59.437 INFO:teuthology.orchestra.run.vm00.stdout:warning: file /etc/logrotate.d/ceph: remove failed: No such file or directory
2026-03-20T19:07:59.437 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-20T19:07:59.462 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 78/98
2026-03-20T19:07:59.462 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 79/98
2026-03-20T19:07:59.472 INFO:teuthology.orchestra.run.vm02.stderr:[Errno 28] No space left on device: '/var/cache/dnf/metadata_lock.pid'
2026-03-20T19:07:59.489 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:07:59.489 ERROR:teuthology.run_tasks:Manager failed: install
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 220, in install
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 644, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 640, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 222, in install
    remove_packages(ctx, config, package_list)
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 103, in remove_packages
    with parallel() as p:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthos/teuthology/teuthology/task/install/rpm.py", line 43, in _remove
    remote.run(args='sudo yum clean all')
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm02 with status 1: 'sudo yum clean all'
2026-03-20T19:07:59.490 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-20T19:07:59.492 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-20T19:07:59.492 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T19:07:59.494 DEBUG:teuthology.orchestra.run.vm02:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T19:07:59.495 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T19:07:59.507 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: ntpq: command not found
2026-03-20T19:07:59.509 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-20T19:07:59.511 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-20T19:07:59.576 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T19:07:59.576 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-20T19:07:59.576 INFO:teuthology.orchestra.run.vm05.stdout:^+ 104-167-24-26.lunoxia.fc> 2 8 377 218 +2786us[+2780us] +/- 57ms
2026-03-20T19:07:59.576 INFO:teuthology.orchestra.run.vm05.stdout:^+ ntp.kernfusion.at 2 8 377 86 -3204us[-3216us] +/- 29ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm05.stdout:^* ns1.blazing.de 3 6 377 18 +85us[ +82us] +/- 17ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm05.stdout:^+ ntp1.doc-cirrus.com 3 6 377 21 +262us[ +259us] +/- 20ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm02.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm02.stdout:===============================================================================
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm02.stdout:^+ 104-167-24-26.lunoxia.fc> 2 8 377 475 +2758us[+2572us] +/- 53ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm02.stdout:^+ ntp.kernfusion.at 2 8 377 146 -2987us[-2987us] +/- 28ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm02.stdout:^* ns1.blazing.de 3 8 377 216 +108us[ +34us] +/- 17ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm02.stdout:^+ ntp1.doc-cirrus.com 3 7 377 81 +202us[ +202us] +/- 20ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm00.stdout:^+ ntp.kernfusion.at 2 8 377 154 -1948us[-1943us] +/- 30ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm00.stdout:^* ns1.blazing.de 3 6 377 22 -13us[ -14us] +/- 17ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm00.stdout:^+ ntp1.doc-cirrus.com 3 7 377 25 +269us[ +269us] +/- 20ms
2026-03-20T19:07:59.577 INFO:teuthology.orchestra.run.vm00.stdout:^+ 104-167-24-26.lunoxia.fc> 2 8 377 215 +2356us[+2348us] +/- 58ms
2026-03-20T19:07:59.577 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-20T19:07:59.580 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-20T19:07:59.580 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-20T19:07:59.583 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-20T19:07:59.585 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-20T19:07:59.588 INFO:teuthology.task.internal:Duration was 2740.714069 seconds
2026-03-20T19:07:59.588 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-20T19:07:59.590 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-20T19:07:59.590 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-20T19:07:59.619 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-20T19:07:59.620 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-20T19:07:59.655 INFO:teuthology.orchestra.run.vm02.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T19:07:59.656 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T19:07:59.663 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T19:08:00.165 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-20T19:08:00.165 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local
2026-03-20T19:08:00.165 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-20T19:08:00.184 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm02.local
2026-03-20T19:08:00.185 DEBUG:teuthology.orchestra.run.vm02:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-20T19:08:00.224 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local
2026-03-20T19:08:00.224 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-20T19:08:00.245 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-20T19:08:00.245 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-20T19:08:00.247 DEBUG:teuthology.orchestra.run.vm02:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-20T19:08:00.265 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-20T19:08:00.279 INFO:teuthology.orchestra.run.vm02.stderr:bash: line 1: /home/ubuntu/cephtest/archive/syslog/journalctl.log: No space left on device
2026-03-20T19:08:00.442 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 79/98
2026-03-20T19:08:00.447 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 80/98
2026-03-20T19:08:00.450 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86 81/98
2026-03-20T19:08:00.452 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 82/98
2026-03-20T19:08:00.452 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 83/98
2026-03-20T19:08:00.669 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:08:00.669 ERROR:teuthology.run_tasks:Manager failed: internal.syslog
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/syslog.py", line 76, in syslog
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/syslog.py", line 163, in syslog
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm02 with status 1: 'sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log'
2026-03-20T19:08:00.670 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-20T19:08:00.672 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-20T19:08:00.672 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-20T19:08:00.696 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-20T19:08:00.720 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-20T19:08:00.747 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-20T19:08:00.750 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-20T19:08:00.752 DEBUG:teuthology.orchestra.run.vm02:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-20T19:08:00.762 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-20T19:08:00.774 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-03-20T19:08:00.784 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = core
2026-03-20T19:08:00.811 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core
2026-03-20T19:08:00.823 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-20T19:08:00.843 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:08:00.843 DEBUG:teuthology.orchestra.run.vm02:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-20T19:08:00.856 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:08:00.856 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-20T19:08:00.876 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:08:00.876 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-20T19:08:00.879 INFO:teuthology.task.internal:Transferring archived files...
2026-03-20T19:08:00.879 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/remote/vm00
2026-03-20T19:08:00.879 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-20T19:08:01.084 DEBUG:teuthology.misc:Transferring archived files from vm02:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/remote/vm02
2026-03-20T19:08:01.084 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-20T19:08:01.108 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-20_18:10:20-rgw-tentacle-none-default-vps/2719/remote/vm05
2026-03-20T19:08:01.108 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-20T19:08:01.269 INFO:teuthology.task.internal:Removing archive directory...
2026-03-20T19:08:01.269 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-20T19:08:01.273 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-20T19:08:01.274 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-20T19:08:01.322 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-20T19:08:01.325 INFO:teuthology.task.internal:Not uploading archives.
2026-03-20T19:08:01.325 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-20T19:08:01.328 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-20T19:08:01.328 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-20T19:08:01.331 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-20T19:08:01.332 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-20T19:08:01.345 INFO:teuthology.orchestra.run.vm00.stdout: 8532144 0 drwxr-xr-x 3 ubuntu ubuntu 23 Mar 20 19:08 /home/ubuntu/cephtest
2026-03-20T19:08:01.345 INFO:teuthology.orchestra.run.vm00.stdout: 12989198 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 20 18:26 /home/ubuntu/cephtest/ceph.data
2026-03-20T19:08:01.346 INFO:teuthology.orchestra.run.vm00.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-20T19:08:01.347 INFO:teuthology.orchestra.run.vm02.stdout: 8532144 0 drwxr-xr-x 3 ubuntu ubuntu 76 Mar 20 19:08 /home/ubuntu/cephtest
2026-03-20T19:08:01.347 INFO:teuthology.orchestra.run.vm02.stdout: 12989134 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 20 18:26 /home/ubuntu/cephtest/ceph.data
2026-03-20T19:08:01.347 INFO:teuthology.orchestra.run.vm02.stdout: 8532147 4 -rw-r--r-- 1 ceph root 20 Mar 20 18:26 /home/ubuntu/cephtest/url_file
2026-03-20T19:08:01.347 INFO:teuthology.orchestra.run.vm02.stdout: 8532150 0 srwxr-xr-x 1 root root 0 Mar 20 18:26 /home/ubuntu/cephtest/rgw.opslog.ceph.client.1.sock
2026-03-20T19:08:01.348 INFO:teuthology.orchestra.run.vm02.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-20T19:08:01.364 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T19:08:01.364 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 552, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 364, in create_pools
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/rgw.py", line 269, in start_rgw
    rgwadmin(ctx, client, cmd=['gc', 'process', '--include-all'], check_status=True)
  File "/home/teuthos/src/github.com_kshtsk_ceph_938e12e80b676435f28993327ab6082a0d57e922/qa/tasks/util/rgw.py", line 34, in rgwadmin
    proc = remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm00 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-20T19:08:01.364 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-20T19:08:01.367 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm00 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'
2026-03-20T19:08:01.368 INFO:teuthology.run:Summary data:
description: rgw/dedup/{beast bluestore-bitmap fixed-3-rgw ignore-pg-availability overrides supported-distros/{centos_latest} tasks/{0-install test_dedup}}
duration: 2740.714068889618
failure_reason: 'Command failed on vm00 with status 1: ''adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid foo.client.0 --purge-data --cluster ceph'''
flavor: default
owner: kyr
sentry_event: null
status: fail
success: false

2026-03-20T19:08:01.368 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-20T19:08:01.377 INFO:teuthology.orchestra.run.vm05.stdout: 8532146 0 drwxr-xr-x 3 ubuntu ubuntu 95 Mar 20 19:08 /home/ubuntu/cephtest
2026-03-20T19:08:01.377 INFO:teuthology.orchestra.run.vm05.stdout: 12988622 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 20 18:26 /home/ubuntu/cephtest/ceph.data
2026-03-20T19:08:01.378 INFO:teuthology.orchestra.run.vm05.stdout: 8532148 4 -rw-r--r-- 1 ubuntu ubuntu 409 Mar 20 18:26 /home/ubuntu/cephtest/ceph.monmap
2026-03-20T19:08:01.378 INFO:teuthology.orchestra.run.vm05.stdout: 8532152 4 -rw-r--r-- 1 ceph root 20 Mar 20 18:26 /home/ubuntu/cephtest/url_file
2026-03-20T19:08:01.378 INFO:teuthology.orchestra.run.vm05.stdout: 8531707 0 srwxr-xr-x 1 root root 0 Mar 20 18:26 /home/ubuntu/cephtest/rgw.opslog.ceph.client.2.sock
2026-03-20T19:08:01.378 INFO:teuthology.orchestra.run.vm05.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-20T19:08:01.393 INFO:teuthology.run:FAIL